
Transcript for Public Workshop - Bridging the IDEAL and TPLC Approaches for Evidence Development for Surgical Medical Devices and Procedures, December 2, 2011

UNITED STATES OF AMERICA

DEPARTMENT OF HEALTH AND HUMAN SERVICES

FOOD AND DRUG ADMINISTRATION

+ + +

CENTER FOR DEVICES AND RADIOLOGICAL HEALTH

+ + +

BRIDGING THE IDEAL AND TPLC APPROACHES FOR EVIDENCE DEVELOPMENT FOR SURGICAL MEDICAL DEVICES AND PROCEDURES

+ + +

December 2, 2011

8:15 a.m.

FDA White Oak Campus

10903 New Hampshire Avenue

The Great Room (Room 1503)

White Oak Conference Center, Building 31

Silver Spring, Maryland 20993

FDA: JEFFREY SHUREN, M.D., J.D.

Director, FDA/CDRH

PARTICIPANTS

BINITA ASHAR, M.D., M.B.A., FACS

Office of Device Evaluation

FDA/CDRH

JEFFREY BARKUN, M.D.

McGill University

ABRAM BARTH, M.P.H., J.D.

Office of the Chief Counsel

FDA

ISABELLE BOUTRON, M.D., Ph.D.

Hôtel Dieu Hospital;

Assistance Publique Hôpitaux de Paris;

University Paris Descartes, France

BRUCE CAMPBELL, M.S., FRCP, FRCS

Interventional Procedures and Medical Technologies Advisory Committees, National Institute for Health and Clinical Excellence (NICE)

GREGORY CAMPBELL, Ph.D.

Director, Division of Biostatistics

FDA/CDRH

PIERRE-ALAIN CLAVIEN, M.D., Ph.D.

Group Leader, Division of Visceral & Transplant Surgery

University Hospital of Zurich

TAMMY CLIFFORD, Ph.D., M.Sc.(A)

Chief Scientist, Canadian Agency for Drugs and Technologies in Health

JONATHAN COOK, Ph.D.

Health Services Research Unit

University of Aberdeen, U.K.

PHILIPP DAHM, M.D., M.H.Sc.

Professor of Urology and Residency Program Director

University of Florida College of Medicine, Gainesville, FL

PHILIP DESJARDINS, J.D.

Associate Director for Policy

FDA/CDRH

KAY DICKERSIN, M.A., Ph.D.

Professor, Johns Hopkins Bloomberg School of Public Health

Director, Center for Clinical Trials

Director, U.S. Cochrane Center

MARKUS DIENER, M.D.

Medical Director, Study Centre of the German Surgical Society (SDGC)

University of Heidelberg, Germany

PATRICK ERGINA, M.D., M.P.H., M.Sc., FRCS(C), FACS

Cardiac Surgeon, McGill University

SHAMIRAM FEINGLASS, M.D., M.P.H.

Vice President for Global Medical and Regulatory Affairs

Zimmer, Inc.

STEPHEN GRAVES, M.B.B.S., D.Phil., FRACS, F.A.Orth.A.

Professor, Flinders University in Adelaide, South Australia

Director, Australian Orthopaedic Association National Joint Replacement Registry

THOMAS GROSS, M.D., M.P.H.

Acting Director, Office of Surveillance and Biometrics

FDA

TRISH GROVES, M.B.B.S., M.R.C.Psych.

Deputy Editor, British Medical Journal

Editor-in-Chief, British Medical Journal Open

BARRETT HAIK, M.D.

Member, Board of Regents

American College of Surgeons Advisory Council for Ophthalmic Surgery

JUDITH HARGREAVES, RN, M.Sc.

Associate Director, Global Health Economics and Outcomes Research

ETHICON, Inc. Johnson & Johnson

CARL HENEGHAN, M.A., MRCGP

Clinical Reader and Director of the Centre for Evidence-Based Medicine

University of Oxford

RICHARD E. KUNTZ, M.D., M.Sc.

Sr. Vice President and Chief Scientific, Clinical and Regulatory Officer

Medtronic, Inc.

RICHARD LILFORD, M.B., B.Ch., Ph.D., FFPHM, DRCOG, FRCP

Vice Dean for Clinical Research; Professor of Clinical Epidemiology

University of Birmingham, U.K.

SUSANNE LUDGATE, FRCR, FRACR

Medical Director, Medicines and Healthcare Products Regulatory Agency

U.K.

WILLIAM MAISEL, M.D., M.P.H.

Deputy Director for Science

FDA/CDRH

DANICA MARINAC-DABIC, M.D., Ph.D.

Director, Division of Epidemiology

FDA/CDRH/OSB

PETER McCULLOCH, M.D., FRCSed, FRCS (Glas)

University of Oxford

TOSHIA MIYATA, M.D.

Ministry of Health, Labour and Welfare, Japan

NEIL OGDEN, M.S.

Branch Chief, General Surgery Devices

FDA

ANITA RAYNER, M.P.H.

Associate Director, Policy and Communications

FDA/CDRH/OSB

RITA REDBERG, M.D., M.Sc.

University of California, San Francisco

JAMA and Archives

MARY BETH RITCHEY, Ph.D.

Associate Director, Postmarket Surveillance Studies

Division of Epidemiology

FDA/CDRH/OSB

JYME SCHAFER, M.D.

Director, Division of Medical and Surgical Services

Coverage and Analysis Group

Centers for Medicare and Medicaid Services

ART SEDRAKYAN, M.D., Ph.D.

Weill Cornell Medical College

BILL SUMMERSKILL, M.B.B.S., M.Sc.

Senior Executive Editor, The Lancet

JAN P. VANDENBROUCKE, Ph.D., M.Sc.

Clinical Epidemiologist, Leiden University, The Netherlands

MARLENA VEGA, Ph.D.

CEO, SobreVivir / A Will to Live

RON YUSTEIN, M.D.

Acting Deputy Office Director

FDA/CDRH/OSB

BRAM D. ZUCKERMAN, M.D., FACC

Director, Division of Cardiovascular Devices

FDA/CDRH/ODE


INDEX


OPENING REMARKS - Jeffrey Shuren, M.D., J.D.

SETTING THE STAGE: INTRODUCTORY SESSION

Moderators: Bram D. Zuckerman, M.D., and Mary Beth Ritchey, Ph.D.

INTRODUCTIONS - Danica Marinac-Dabic, M.D., Ph.D.

GOALS FOR THE DAY - Peter McCulloch, M.D., and Art Sedrakyan, M.D., Ph.D.

THE CHALLENGES OF MEDICAL DEVICE & SURGICAL PROCEDURE REGULATION - William Maisel, M.D., M.P.H.

CONCEPTUAL FRAMEWORK FOR EVIDENCE EVALUATION FOR DEVICES & PROCEDURES - Art Sedrakyan, M.D., Ph.D.

IDEAL: WHAT IS IT, AND WHY IS IT RELEVANT? - Peter McCulloch, M.D.

THINK TANK DISCUSSION: LEVERAGING THE ACCUMULATED EVIDENCE - Facilitator: Carl Heneghan, M.A., MRCGP

OVERVIEW OF IDEAL FRAMEWORK - Jeff Barkun, M.D.

PREMARKET EVIDENCE GENERATION AND EVALUATION FOR SURGICAL MEDICAL DEVICES

Moderators: Binita Ashar, M.D., M.B.A., and Neil Ogden, M.S.

FDA'S PREMARKET EVALUATION: METHODOLOGICAL OPPORTUNITIES OF THE NEW FDA INITIATIVES - Gregory Campbell, Ph.D.

THE IDEAL RECOMMENDATIONS FOR EARLY PHASE STUDIES: REGISTERS FOR FIRST-IN-MAN; PROSPECTIVE STUDY DESIGN AND REPORTING RECOMMENDATIONS FOR EARLY PHASE STUDIES; PROSPECTIVE COLLABORATIVE DATABASES - Peter McCulloch, M.D.

THINK TANK DISCUSSION: OPTIMIZING AN INTEGRATED TOTAL PRODUCT LIFE CYCLE FOR DEVICES AND PROCEDURES - Facilitators: Anita Rayner, M.P.H., and Abram Barth, M.P.H., J.D.

CONDITIONAL APPROVAL (LICENSING WITH EVALUATION) - SIMILARITIES AND DIFFERENCES FROM POST-APPROVAL STUDY REQUIREMENTS

Moderators: Jyme Schafer, M.D., and Ron Yustein, M.D.

CURRENT PRACTICE IN U.S. - Mary Beth Ritchey, Ph.D., and Jyme Schafer, M.D.

CURRENT PRACTICE IN E.U. - Bruce Campbell, M.S., FRCP, FRCS

THINK TANK DISCUSSION: WHAT IS THE POTENTIAL SCOPE OF L&E? - Facilitators: Trish Groves, M.B.B.S., M.R.C.Psych., Rita Redberg, M.D., M.Sc., and Philip Desjardins, J.D.

INNOVATIVE APPROACHES FOR POSTMARKET EVALUATION: AT THE CUTTING EDGE

Moderator: Thomas Gross, M.D., M.P.H.

CONTEMPORARY POSTMARKET FOR DEVICES - MDEpiNet - Danica Marinac-Dabic, M.D., Ph.D.

IDEAL RECOMMENDATIONS FOR STAGE 4 - Jonathan Cook, Ph.D.

REPORT FROM SMALL GROUP DISCUSSIONS

ADJOURN

M E E T I N G

(8:10 a.m.)

DR. SHUREN: All right. Good morning. I'm Jeff Shuren. I'm the Director of the FDA Center for Devices and Radiological Health, or CDRH. And I'd like to welcome you all to today's workshop. As you know, this is FDA's Bridging the IDEAL and TPLC Approaches for Evidence Development for Surgical Medical Devices and Procedures Public Workshop, and I would dare anyone to say that ten times fast. We will work on shorter titles in the future.

We are very fortunate that so many of you have shown such strong willingness to invest your time, energy, and expertise towards the progression of this workshop. Your participation and valuable contribution will enable us to hold an intellectually engaging and highly productive meeting.

The purpose of this public workshop is to facilitate a discussion among FDA, other national and international governing bodies, academia, physicians, and other key stakeholders in the scientific community to further refine and advance the FDA's TPLC (Total Product Life Cycle) and IDEAL (Idea, Development, Exploration, Assessment, and Long-Term Study) frameworks related to evidence generation and evaluation for devices and surgical procedures. We share a deep interest in advancing the infrastructure and methodology for evaluating surgical devices and procedures while generating the best available scientific evidence on medical devices to protect and promote the public health.

The IDEAL network, a group of methodologists and clinicians, grew from a series of conferences at Balliol College, Oxford from 2007 to 2009. The need for IDEAL arose because conventional studies did not adequately address factors that can affect outcomes related to surgery, such as learning curves and surgeon preferences. Thus far, IDEAL has developed a distinct framework for the assessment of surgical practices. Because the development and assessment of some medical devices can depend upon the surgical procedures and practices necessary for their use, the FDA CDRH became interested in exploring how the conceptual framework that IDEAL developed could be applied to the assessment of medical devices that depend on surgical procedures.

This workshop's findings have the potential to inform FDA policy regarding the appropriate methods for evaluating the safety and effectiveness of medical devices that depend on surgical procedures. We hope that this workshop serves as a launching point for our collaboration and look forward to continuing our work together to advance our shared objectives. So, I'd like to thank you now for your participation, and I greatly look forward to the outcome of today's workshop. Thank you very much.

(Applause.)

DR. MARINAC-DABIC: Good morning and welcome to the FDA, to this very exciting workshop. I would like to thank you all for coming, and thank you for sharing your ideas and expertise in preparation for this conference. And particularly, I'd like to thank our colleagues from University of Oxford and Cornell University for working very closely with the FDA staff to make this workshop happen.

As you know, it has been almost a year since our last meeting, which we held in Oxford, and many things have happened during this last year. On one side, you know, IDEAL has certainly progressed, with some new ideas put on the table. And at the same time, FDA embarked on a number of new initiatives, both premarket and postmarket. Some of them have already been issued as draft guidances. We will be discussing them, and this is also a good opportunity for us to provide feedback and give formal comments on those.

So, this is going to be a very productive day, and I would like to thank not only all of you for coming, but also the members of the public who are here to help us have a richer and broader discussion, giving perspectives from different stakeholders.

I would like to introduce now the two moderators of this first session: Dr. Bram Zuckerman, who is the Director of the Cardiovascular Division in our Office of Device Evaluation. Dr. Zuckerman sits over there, and we're trying to make this conference really very interactive, so that's why we don't have special seats for the moderators. Also, Dr. Mary Beth Ritchey, who is Associate Director for Postmarket Surveillance Studies in the Division of Epidemiology.

I would also like to start by asking all of you to very briefly tell us your name, your affiliation, and your area of expertise, so we can go quickly around the table to set the stage for the upcoming session. Maybe we can start with Dr. Peter McCulloch.

DR. McCULLOCH: Good morning. I'm Peter McCulloch. I'm an academic surgeon at the University of Oxford and the director or coordinator, I suppose you could call it, of the IDEAL collaboration.

DR. SEDRAKYAN: I'm Art Sedrakyan at the Weill Cornell Medical College. I'm an associate professor and a former surgeon, now a health services researcher, and director of the Patient-Centered Comparative Effectiveness Research Program.

DR. GROVES: I'm Trish Groves. I'm from the BMJ, which is the British Medical Journal. I'm Deputy Editor there and in charge of the research, and also work on research methods and reporting; very interested in IDEAL.

DR. HENEGHAN: Hi. Good morning, everybody. My name is Carl Heneghan. I'm Director of the Centre for Evidence-Based Medicine in the University of Oxford, clinical epidemiology, standard GP. And I've worked closely with the BMJ actually looking at the evidence base around medical devices.

DR. LILFORD: Good morning, all. My name is Richard Lilford. Obstetrics and gynecology is my clinical specialty, but like Art, I no longer practice it. I'm Vice Dean for Clinical Research at the University of Birmingham and professor of clinical epidemiology, and my big interests are service -- research devices, especially health economics for devices, and research methodology.

DR. GRAVES: I'm Stephen Graves. I'm professor of arthroplasty at Flinders University in Adelaide, South Australia. I'm also director of the Australian Orthopaedic Association National Joint Replacement Registry; I was involved in its development and have continued to manage that registry since it was started back in 1999. I'm also a member of numerous government committees in Australia involved in regulation and also pricing of medical devices.

DR. GROSS: Good morning. My name is Tom Gross, and I'm Acting Director of the Office of Surveillance and Biometrics here at FDA. My background is as a pediatrician, epidemiologist, and I'm currently a bureaucrat.

DR. VEGA: Good morning. I'm the lady in red. My name is Marlena Vega. I'm here by the good graces of Danica. In real life I'm a gynecologist. I'm no longer in practice, but I'm here as a patient advocate, so I'm changing hats. And I'm a three-time cancer survivor and fourth generation. So, I'm here to listen, and to thank you sincerely, for patients, for having me here.

DR. LUDGATE: Hold it down. Okay.

UNIDENTIFIED SPEAKER: She's on two of my committees. She can never do that --

DR. LUDGATE: Good morning. I'm Susanne Ludgate. I'm the Medical Director of the Medicines and Healthcare Products Regulatory Agency -- sorry it's such a mouthful -- which is the equivalent of the FDA in the U.K.

DR. BRUCE CAMPBELL: I'm Bruce Campbell. I'm a vascular surgeon in Exeter in the U.K. I chair two separate advisory committees for NICE. One is the Interventional Procedures Advisory Committee, which I've done since 2002, and the other is the Medical Technologies Advisory Committee since 2009. And I've been intimately involved with the setup and the development of those two NICE programs.

DR. BARKUN: My name is Jeffrey Barkun. I'm a pancreas and transplant surgeon from McGill. I've been on our Technology Assessment Committees for Canada and for Québec for about 15 years actually, and I'm also a member of the Balliol collaboration that published the IDEAL papers.

DR. CLAVIEN: I'm sorry -- just sorry. Okay. I'm Pierre Clavien. I'm a surgeon from Switzerland, trained in Canada, and was for several years at Duke University. Now, I'm the head of surgery in Zurich, Switzerland, I have a long-time interest in outcome research, and I'm also a member of the council of the Swiss National Foundation.

DR. RITCHEY: Good morning. I'm Mary Beth Ritchey. I'm the Associate Director for Postmarket Surveillance Studies in the Division of Epidemiology in the Office of Surveillance and Biometrics here at FDA.

DR. CLIFFORD: Good morning, everyone. I'm Tammy Clifford. I'm Chief Scientist at the Canadian Agency for Drugs and Technologies in Health, another mouthful. We are Canada's health technology assessment agency, and in doing that we do assessments of medical devices and surgical procedures. I'm an epidemiologist by training.

DR. KUNTZ: Good morning. I'm Rick Kuntz. I'm the Chief Scientific Officer, Medtronic device company, and I'm a cardiologist with a research interest in clinical trial methodology.

DR. YUSTEIN: And I'm Ron Yustein. I'm the Acting Deputy Office Director for the Office of Surveillance and Biometrics here at CDRH, and I'm a gastroenterologist and hepatologist by training.

DR. DICKERSIN: My name is Kay Dickersin. I'm an epidemiologist at Johns Hopkins Bloomberg School of Public Health. I'm Director of the U.S. Cochrane Center and also Director of the Center for Clinical Trials at Johns Hopkins.

DR. COOK: I'm Jonathan Cook. I'm a methodologist based at the University of Aberdeen, Scotland, U.K. I've been involved with IDEAL group and before, the Balliol collaboration. I'm interested in surgical evaluation and surgical trials.

DR. BOUTRON: Good morning. My name is Isabelle Boutron. I am an associate professor in clinical epidemiology in the University Paris Descartes in Paris and also involved in the French Cochrane Center. And I'm particularly interested in the methodological issue when assessing surgical procedure.

DR. DIENER: Good morning, everybody. My name is Markus Diener. I'm a surgeon from the University of Heidelberg and also Medical Director of the Study Centre of the German Surgical Society. Our main task is actually doing multicenter surgical trials and also surgical meta-analyses.

DR. MIYATA: Good morning. My name is Toshia Miyata. I'm from Japan. I am a Deputy Director for the licensing and regulation of medical devices. Today, I am very much looking forward to a fruitful discussion. Thank you.

DR. ERGINA: I'm Pat Ergina. I'm a cardiac surgeon at McGill University, and I've been part of the Balliol, the IDEAL collaboration, from the beginning. I have a side interest in surgical study designs.

DR. VANDENBROUCKE: Okay. I have to continue pushing while speaking. Okay. That's a good system.

(Laughter.)

DR. VANDENBROUCKE: My name is Jan Vandenbroucke. I was trained as a general internist, became an epidemiologist. I'm a clinical epidemiologist at Leiden University in the Netherlands and mainly interested in methods. Thank you.

MS. HARGREAVES: My name is Judith Hargreaves. I'm Associate Director for Global Health Economics and Outcomes Research for Ethicon Johnson & Johnson. I also do some volunteer work for NICE NHS Evidence as an external advisor. I specialize in medical devices, particularly hernia surgery and wound closure.

DR. DAHM: My name is Philipp Dahm. I'm an academic urologist at the University of Florida in Gainesville. I have an interest in surgical innovation and devices.

DR. SUMMERSKILL: Hi. I'm Bill Summerskill, Senior Executive Editor of The Lancet in London. I receive the journal's research content and compile the annual surgery issue. I was part of the original Balliol collaboration.

DR. HAIK: I'm Barrett Haik -- (Off microphone.)

DR. BRAM ZUCKERMAN: Thank you. I'm Bram Zuckerman, Director, FDA Division of Cardiovascular Devices, and a cardiologist by training.

DR. GREG CAMPBELL: Hi. I'm Greg Campbell. I'm a statistician by training and the Director of the Division of Biostatistics here in CDRH.

DR. MARINAC-DABIC: Hi. I'm Danica Marinac-Dabic. I am the Director of the Division of Epidemiology here at CDRH, Office of Surveillance and Biometrics. I'm by training an obstetrician and gynecologist and epidemiologist.

DR. RITCHEY: So, next on the agenda we're going to go through the goals for the day, and this is a joint presentation by Peter and Art.

DR. SEDRAKYAN: -- started. We really appreciate that you came here. It's a wealth of expertise, but it's hard to get into one room. So, the goal for the day is really to brainstorm here and share as much as we can, talk about these concepts that are going to be presented to you, so that we will have a lively discussion and come up with recommendations at the end and also potential future work that should happen to advance both IDEAL agenda and also regulatory science from FDA point of view.

So, in designing this agenda, we thought about two components for each session, and you know the sessions already; they're in front of you. We have moderators who will be moderating talks, and they can entertain questions from the audience if there are any. And then facilitators are going to facilitate discussion among the think tank participants, so people around the table, basically. So, that's the process for us today. We really look forward to the discussion, and we hope it will be a success. And success has a thousand faces, and I think they're here in the room today.

DR. McCULLOCH: Oh, is that working? Yeah.

So, first of all, I don't have too much to say, but I must start by thanking Art and Danica on behalf of everyone for organizing this meeting. This is very much their meeting, and I've essentially been an advisor from afar. So, any success that comes from this has to be attributed entirely to them.

I'd also like to thank them for their vision. They came and helped with the IDEAL collaboration meeting in Oxford last year and immediately saw the connections with their own work on devices which I have to say at the time I didn't, and I think that the opportunity before us is an enormous one. You heard Dr. Shuren saying that we have the opportunity to inform the direction of FDA policy at this, which is a time of major change for them.

This is an outstanding group of people, and I'm very grateful to everyone who's come. We have this great opportunity, and I think we really need to take hold of it. I hope that the free-form discussions will allow us to reach some kinds of conclusions, and I think tomorrow's session after the examples is going to be important in summarizing those. And I hope as many as possible of you can stay for that.

Having absolved myself of responsibility for the organization, I have two organizational details that I feel I ought to pass on to you. First of all, I know that Danica and Art have been working under quite exceptional pressure to get this meeting organized. And as you may be aware, some of your reading material has arrived rather late. If you're going to take a full part in the discussions tomorrow, you'll need to have a look at the examples at some point. So, if you don't read any of the rest of it, please read those.

The second is, and as a Brit and an outsider I can again absolve myself of any understanding of the principles and rules behind this, but it's apparently outwith the regulations for dinner to be provided. We're therefore inviting everybody who would like to join us to a dinner this evening, and we will be circulating a signup sheet. This will be on the familiar Scottish basis of bring your own wallet.

But thank you very much to everybody again for being here, and I look forward to a very interesting discussion. Thanks.

DR. RITCHEY: As we seem to be about 35 minutes ahead of schedule at this point, I think that it would be good to discuss next the conceptual framework for evidence evaluation for devices and procedures. And in the packet that came this morning, the folder, everyone has a copy of that paper. And Dr. Sedrakyan will walk us through this again as well.

DR. MARINAC-DABIC: I suggest that before we go there, we use the opportunity for any questions that might come up from either the invited participants or the members of the public. And I know there are many familiar faces in the audience. If there are any questions from our colleagues from industry or Agency for Healthcare Research and Quality or others from the FDA that will help shed some light on an interest that you would like us to cover during the day, this is the time to speak because we have some time before Dr. Maisel comes to give his opening remarks from the scientific and policy perspective in more detail.

No questions?

DR. HENEGHAN: Hi. Yeah, sure.

DR. MARINAC-DABIC: Okay.

DR. HENEGHAN: I guess I'm the think tank discussion, so I'm going to come in early, and that's what we do in Oxford.

I think we just need to start with the premise, because medical device, surgical device is such a big -- if you like. And for the scope of this meeting over the next two days, I'd really like us to start to think about what we think it might be among this group, as opposed to it being slightly on the vague side right now to me. And if we can find some clarity, I'll be able to be much more useful in pitching in with the discussion. And I don't know whether people think we should talk about this now or at the end of this session, but try and find some real clarity on the scope of the next two days, because it is such a big topic.

DR. SEDRAKYAN: That's a fantastic point, and the scope the way we thought about it is really the devices that are part of surgical procedures, so those devices that enable a surgeon to perform a surgery, or implants. So, that potentially should be the general focus for our discussion, anything that has to do with the intervention. And I don't really think that narrows it down enough, Carl, so we initially thought about implants, but then we thought, if, say, there are any devices that are part of a less invasive surgery that is done through a laparoscopic device, should that be part of this discussion? Probably yes. So not only implants, but also devices that enable a surgeon to deliver a particular procedure.

DR. GROVES: For instance, not the so-called artificial pancreas, which the FDA released some guidance on just this morning. Not that.

DR. MARINAC-DABIC: I think Art touched upon the scope in terms of what types of devices we might consider for, you know, potential inclusion in this type of initiative. But even more importantly, I think there is a very important question for us to tackle at the beginning of this workshop: What are the areas where we see the science meeting the policy, and how does some of the preliminary work that has already been done here at the FDA, putting together very forward-thinking guidance documents that have been launched just recently in the last couple of months, and some of them even this month, connect with the IDEAL framework? I think it's up to this group to define and refine the scope, how far we think we would like to go. But from our perspective at the FDA, we see a lot of opportunities for bridging what IDEAL has already accomplished and where we would like to go as the Agency.

DR. BRUCE CAMPBELL: May I just comment, Art? I mean I do think it's very important that we have some distinction in our minds between devices and procedures. One of the points of IDEAL, and one of the points of one of the programs I chair at NICE, was the fact that devices are subject to a regulatory framework, whereas procedures are not. Now, to me, the binding, the glue between them, is that the evidence generation issues are just the same. So, on the one hand, we have the scientific thoughts about evidence generation. On the other hand, we have the business of them being completely separate, one being regulated, the other not. And I haven't a solution for that, but I just think not getting confused with that is quite important.

DR. McCULLOCH: I'd like to support that, Bruce. Clearly, if you look at the scope of IDEAL and the things that it could be applied to, it's much, much broader than devices. But this meeting has been brought about because of a specific set of events and specific needs within FDA. So, clearly, our focus is on devices, and as Art was indicating and I think Carl was hinting, very much towards the heavier end of the device spectrum. So Class III devices and perhaps some Class II, and perhaps that's one of the areas that we should be trying to define just now.

DR. HENEGHAN: Can I -- I think that's really important. So, we're not looking at lower-end diagnostic devices. You're saying we're looking at the Class III, the most risky devices, the ones that have the most harm and do require the most evidence of effectiveness or safety if you like. And that does provide clarity immediately even from --

DR. McCULLOCH: I mean I'd very much like someone from the FDA to give a perspective on this, because as I said, the basis for this meeting is their current interest in reforming their own system. So, getting the FDA's perspective on exactly where the focus should be would be very helpful.

DR. MARINAC-DABIC: Maybe I can start, and then I'll invite also my colleagues from the FDA to weigh in, and maybe Dr. Zuckerman in particular, since he chairs the Clinical Trials and Clinical Data Subcommittee of our Science Council and had been really instrumental in pushing forward a number of very forward-thinking guidance documents.

But from our perspective, I do not think that we should limit the discussion only to high-risk or Class III or PMA type of devices, because there are lots of gaps that we've identified here at CDRH and clearly have been put forward within the recent IOM report that you probably already saw and read that have to do with actually 510(k) devices. And I think the surgical context, I think it's equally important in both the 510(k) type of world and the PMA world. And I think, you know, some of the surgical issues that we're going to be discussing today, including the evidence generation, we might find ourselves in a situation that there is a lot of gaps in devices that are on the 510(k) pathway, rather than the ones that are undergoing more scrutiny throughout the premarket review.

DR. BRAM ZUCKERMAN: Okay. Thank you, Danica. Bram Zuckerman, Director, FDA Division of Cardiovascular Devices.

I guess I would agree with everyone that there's a wide spectrum of device challenges and clinical trial opportunities. That's why I let my colleague on the left, Dr. Greg Campbell, Director of the Division of Biostatistics here, solve all the difficult problems.

(Laughter.)

DR. BRAM ZUCKERMAN: But to be succinct, I think the initial comments have reflected two challenges. One is that there are general proof of principle trials that need to be done where there are essential questions in medicine that remain unanswered, and devices and device technology can play an important part in answering those questions. As an example, we would perhaps point to the public access defibrillation trial, a key trial that showed that AEDs in public locations are quite important in saving lives. Now, in that type of trial we were able to employ or incentivize four different device manufacturers to work together in order to what we call raise the level of the water in the dam and really increase medical knowledge and not coincidentally, improve AED sales. And I don't say that facetiously because medical devices have a very important role in treatment of patients.

On the other hand, there are specific venues where certainly a device manufacturer and a regulatory agency need to develop a certain clinical trial that will allow for device approval as well as improving medical knowledge, usually within a smaller sphere. So, both types of problems are seen at the agency level, and we're interested in developing efficient frameworks for both types of challenges. And again, I think you'll hear from my colleague Dr. Campbell that we're open to all types of clinical trial designs as long as, at the end of the day, we can cull out reasonable assurance of safety and effectiveness.

I think one challenge, though, is getting all stakeholders to the table, and that's why I'd like to pass the baton to Dr. Kuntz, who is quite experienced from an industry perspective. I'd like to hear how we can make sure that this sort of effort provides the appropriate carrots, and hopefully not sticks, to industry to get them to participate and improve the level of clinical evidence in this field, and also from some of our participants from the public, to sound out how we can get better clinical trial participation, especially in this country. So, perhaps Dr. Kuntz can --

DR. KUNTZ: Sure. Thanks, Bram.

Just to continue the conversations you started, there's no question that we all recognize that we need more evidence. And as we bring more stakeholders in, we have more dimensions of evidence that we have to bring in, so it's an infinite amount of data that we need to get to inform all important stakeholders.

The dimensions I'm talking about are that we generally in the device arena have grown up on randomized, controlled studies based on placebo controls. And with an interest in comparative effectiveness research, ultimately the placebo-based control doesn't often compare against the alternative that patients might be interested in; that it is an experiment aimed at highest level validity, but less oriented toward generalizability when we look at some of the stakeholders involved.

And then there's an interest in trying to bring the patient into the equation. The so-called patient-centered outcomes research effort is going on, and that involves more specialized endpoints that reflect patient values and preferences, as well as applications for inference from mean results of studies to specific patients that may not look like the average patient response. So, this is a high water mark, as you said, to reach. The incentive for industry is that we all need to get better evidence, because those involved with choices and those involved with payment all now understand the importance of evidence. So, a good business model is to provide better evidence for all of our devices at every level, because that will ultimately be what results in more fair and appropriate distribution of the devices.

I think ultimately, we'll probably move to a more observational-based platform of data and also potentially into an arena where almost all patients involved in receiving care are entered into a research database of some sort. A connection between electronic health records and research is still a big gap, as we know, but because of the -- I think the huge demand for evidence in all of the dimensions I talked about, we really have to capitalize as much as possible on the normal, day-to-day experiences to transform them into research subjects, even at the expense of a reduction in validity, because the importance of generalizability is so powerful.

DR. GREG CAMPBELL: So, I'm Greg Campbell, and I guess I would probably suggest that we not limit the discussion today to just surgical implants. And part of the reason is that, you know, we could debate whether, when interventional cardiologists are implanting things, that's a surgery or not, and I'm not a surgeon. But the point is the problems are very similar, problems having to do with the skill of the physician, problems having to do with the learning curve. The other issue is sometimes the device is compared to a surgery, so, for example, CABG or endarterectomy. And in that case, you know, in the premarket, we're very much interested in the surgical technique as well.

CDRH has a number of surgeons, including Binita Ashar, who is here at the table, and others who are probably in the audience. So we're very interested in surgical trials as they relate to medical devices either for implants that are surgically done or other kinds. And so I look very much forward to the discussion today.

DR. FEINGLASS: So, I'm Shami Feinglass from Zimmer. We're an orthopedic company. And to echo what Richard had said earlier, we're very interested in really changing the face of evidence as it's presented today. I would agree with Richard that people are probably going to move more towards observational studies and looking at maybe a pragmatic clinical trial, something else that moves us past what some of us in the industry see as a traditional drug model that's very hard to apply to devices.

I'd add that one of the reasons industry is very interested, besides agreeing wholeheartedly with Richard that it's the better business model to have better evidence, is that really as you move that bar up, the industry wants something that is transparent and something that is predictable, and they will flock to different study methodologies as those become mainstream. So, whatever this group can do as they come together and help push these things to the mainstream, that will move industry faster forward in that direction because everybody else is doing it. So, many of you may think that's very obvious. It actually isn't.

So, the more this group can get together and define what those appropriate methodologies are behind what needs to be done for surgical devices or trials, that will really help the industry be able to stand in line behind that, deliver the evidence that regulatory bodies and payers likely need, and then be able to get those products to the patients who at the end of the day are the reason that we actually are all here. We want better care for those patients, and this is a way to do that.

DR. MARINAC-DABIC: So, we have time for one more comment, and I see Dr. Maisel is here.

DR. BARKUN: Thank you. These discussions are actually quite similar, for those people who were with the IDEAL group, to what we had before.

Can I propose the following? The first is I think that there are so many similarities, as was discussed, between an interventional cardiologist, gastroenterologist, whatever "ist" that you look -- in surgery that I think that from a methodological perspective, it does make sense to keep them there. There will be a proviso in a second.

The second reason I think is because, as was mentioned there, when you look at the outcomes, which are one of the major issues quite apart from the methodology, I think you have to look at the safety and efficacy, and if we start looking at only one angle of this, so very early on just safety or -- and without efficacy. And we feel -- the people in the IDEAL feel -- that that was one of the issues depending on where you're at.

So, I propose the following: to look at any device that requires an intervention, but to try to focus, however, on diagnostic -- sorry, on therapeutic rather than diagnostic, so diagnostic starts a decision tree which is so far up and has so many branches that we should really try to concentrate on therapy per se. And the final one, which I had found the most useful in the discussion we had before, is to try to get people, when they're talking and starting to get very specific about a recommendation or anything you want to say, to give an example of what they're talking about. And I understand that we have some discussions, you know, with specific case scenarios. But the fact is, if I'm, you know, if I'm giving you a comment, and I'm thinking about, you know, something in GI that's being put on that has marginal safety, and someone else is thinking about a Class III device that, you know, electrocutes the patient and the physician at the same time, I think we need to have some idea. So, I'd ask people to try to say what it is that they're thinking of, because it will help the focus of discussion as we go along.

DR. LUDGATE: I wonder if I could just say a few words from a regulator's point of view? Because as you know, we're damned if we do and damned if we don't. We either stop innovation, or if something goes wrong, we haven't regulated it enough. And I think I'd just like to put on the table, as a regulator, the problems that I see from day to day.

First of all is equivalence. I don't think we've ever defined equivalence when something comes along that is claimed to be equivalent, and I find that a very difficult area. When is something a "me too"; when is it not?

When is it appropriate to do a randomized controlled trial, and when is it not? And we have to look at that and its limitations.

I think we need to look at the limitations of some trials. For example, orthopedic implants don't go wrong, really wrong, for a number of years. You cannot do trials over that period of time. What do we do in those sort of circumstances?

And lastly, I think we need to put a great deal more stress on postmarket surveillance because particularly with implants, things aren't going to go wrong for a while. And I think we need to look at very well-defined protocols in those areas.

So, as a regulator, these are the areas that I am finding difficult and I'd just like to --

DR. MARINAC-DABIC: Thank you.

DR. HAIK: Just one quick comment. Having been an ophthalmologist for well over 30 years, I can say that conflict of interest often plays a major role in the quality of the studies that are done, and many times so many of our physicians receive some sort of remuneration or investigative status or consultant status for items that are going through. Obviously, in orthopedics, we all know that's been a major problem and conflict, and in cardiology, and in other areas also, I mean throughout all of medicine. If you don't deal with physician/surgeon conflict or industry-surgeon interactions -- I'm not saying they shouldn't exist, but they need to be relative -- there are important guidelines to consider.

DR. MARINAC-DABIC: I would like now to introduce Dr. Bill Maisel, who is the Deputy Director of the Center for Devices and Radiological Health and who also serves as chief scientist at our Center, to give his presentation, which will touch upon many of the science and policy issues that we're trying to discuss today.

DR. VANDENBROUCKE: A small technical point. Is it possible to move the rostrum forward? Because for this whole row, it's difficult to see the speaker.

DR. MARINAC-DABIC: I think that can be arranged. In fact, move the podium a little bit.

DR. MAISEL: I don't know if we can move the podium, but I'll move. Can people hear me okay if I'm out over here? If you have trouble hearing me, just raise your hand. I'll try to speak loud.

First of all, thank you for the opportunity to speak here. I think it's really a great collection of people, of perspectives, and that's really one of the things we've tried to do as a Center: think about how to regulate some of these products and how the products we regulate intersect with the practice of medicine and the care of patients.

What I'd like to try to do over the next 30 minutes or so is just give you some perspective on how we've been thinking about evaluation of medical devices, how we view the intersection with the procedures that are necessary to implant them or the procedures during which these devices are used. I also wanted to provide some insights into how we've been thinking about innovation and product evaluation to try to streamline our ability to get products to market faster, while still assuring that they're safe and do what they claim to do.

As many of you in the room know, I'm sure, we regulate a very broad array of products ranging from simple tongue depressors and stethoscopes to the most complicated devices like implantable defibrillators and breast implants. We also regulate radiological products. Many of you got here by passing through a product that we regulate, airport screeners. Many of you have products on your waists right now or in your pockets, smartphones which may be turned into medical devices by apps. So, really there's a broad array of products, and these devices really touch everything we do every day.

On top of the medical device component is the surgical procedure component, and this is data that was published last year, but it's from 2007. It just shows the vast enormity of the issue that is facing us: 45 million surgical procedures of a wide variety. Many -- all of these procedures involve medical devices. Many of them involve more than one medical device. And so the task before us is to try and figure out how we evaluate the product; how we separate product performance from surgical procedure performance; how we deal with the complexity of the different skills of the surgeons, the hospitals, the different models of medical devices. It's really obviously very complicated.

One of the other components that is key to this conversation is that devices are not drugs, and it may seem like a very obvious statement, and I'm sure many of you have thought about this, but it really is fundamental to how we think about evidence development and synthesis and making decisions. And the regulatory definition for the U.S. -- for the FDA -- about what are devices -- and I've taken a very long, complicated paragraph and distilled it down to a couple of bullet points -- but basically, a device does not achieve its intended purpose through chemical action on the body, and it's not dependent on being metabolized. So, that leaves a broad definition of what is a device and quite frankly creates some areas that just don't make sense from a regulatory perspective. An example might be sunscreen where you could have a sunscreen that gets metabolized and is a drug, or you could have a sunscreen that's simply a barrier and doesn't get metabolized, and it's a device. So, it doesn't always make perfect sense.

Well, the other fundamental differences between devices and drugs are simply how we think about them, how the product development pathway works, and what it means for us as clinicians and regulators. You know, the rate of technology change for a device is extremely rapid, as many of you know. The device life cycle for a company, for a model that comes on and goes off the market, is measured in months, not years, as opposed to drug development, which takes a very long time. When a drug gets on the market, it usually stays on the market for decades. We learn a lot about the drug over that time. We learn about the adverse events. We learn about what happens when a large number of patients gets exposed to it. And that's very different with devices. Oftentimes, by the time we're learning about what the device consequences are over the long term, we're dealing with a new model or a new iteration of the device. And so we have to think about that as we're evaluating the products and as we're planning our Total Product Life Cycle approach. What's the balance for what we need to know before it gets on the market? And we have to prepare for monitoring these devices.

The other key component is that devices can be evaluated on the bench top in the laboratory in a way that drugs cannot. In some cases, you can do more on a bench top than you can in a clinical trial. So, the concept that every device must be evaluated through a long clinical trial, I don't think we subscribe to. And as an example, oftentimes you can test a device on a bench top to failure and learn more about the device that way than you might be able to do in a clinical trial where you might only have one or two failures of a device. So, certainly we can learn a lot by evaluating devices through the testing methodology that we have, and we always try to leverage that.

There are certainly some key, important differences with regard to research on medical devices, such as the fact that they are more commonly reimbursed by CMS during the research and development period if the studies are done under an IDE. As an example, here's a drug, captopril, which came on the market in June of 1981, FDA approved. It's the same molecule that's been on the market for three decades. There have been millions of patients exposed to it. We know a lot about the drug. We know what happens to people who get it. We pretty much know everything about all the side effects, and that's very different from implantable defibrillators, which over this 15- or 20-year period have been reduced in size by eightfold and have had an exponential increase in computer memory. They can do different things. We can communicate with them wirelessly now. It's just a completely different ballgame. And so we need to think about these products differently, and we don't want to be so slow that we have 1990 products in 2011. And so we need to balance the risk and benefit and make reasonable decisions.

Well, the other key component, as you're aware, is just the fundamental difference in evaluating devices versus drugs, and this is where the real challenge comes in, and it gets even more complicated when you fold in the surgical procedures that they're being used in. But obviously, it's sometimes impossible to mask either the patient or the investigator to the device that's being studied. How do you study a total artificial heart without the patient knowing that they've got a huge surgical procedure? Sometimes it's impossible to use a placebo, and if you don't use a placebo, there's a huge placebo effect from implantable devices or other types of devices or surgical procedures. Adverse events are very difficult to classify or attribute as a device-related problem versus a surgical-procedure-related problem. For example, if you have a surgical clip that's being used to prevent bleeding in a surgical procedure, and a patient has bleeding, is that a complication of the procedure, or is that a complication of the device? And sometimes it's extremely hard to sort that out.

The other key component for devices obviously is a learning curve sometimes in the surgical technique, in the procedure, and we need to think thoughtfully about our data analysis and what to expect, and anticipate those learning curves, so that when we're looking at the data, we don't inappropriately stop a therapy from reaching the market because of that learning curve, if the device and the procedure overall might be beneficial.

And fundamentally the challenge with procedures is that when we're trying to evaluate these products, it's not just the device that we're evaluating. We're evaluating the surgical technique. We're evaluating the skill of the surgeon. Sometimes we're evaluating the compliance of the patient with instructions that were given. And so we have to be really thoughtful about designing our studies and collecting the information that we need when we're thinking about these things.

Well, here's an example of the placebo effect of a device. This is from the MIST trial, which was a PFO closure device. A PFO is a hole in the heart that allows in utero oxygenated blood to reach the fetus. In this trial, they were evaluating whether PFOs were associated with migraines and whether closing a PFO could reduce migraines. So, this trial was PFO closure versus a sham procedure where the patient was anesthetized and their groin was instrumented, but no PFO closure device was put in. And we won't get into discussing the ethics of doing that kind of study, but what we can see is that in the black, which was the sham procedure, there's a huge placebo effect for this type of subjective endpoint of reduction in migraine, and 25 percent of patients had a 50 percent reduction in their migraine days. Almost 20 percent had a reduction in their headache burden.

So, if we had not had that placebo group and were looking just at the red bars, you'd think you had an incredible product. But by having the placebo arm in this trial, still the product may or may not work. We could debate that. But it certainly gives you a different view of what the data looks like.

Here's an example of a learning curve associated with stereotactic breast biopsy. This plots the number of biopsies performed against the technical success rate, and as you can see, it wasn't until surgeons had performed about ten biopsies that they really got good at it, that they sort of maxed out on their skill level. But these are the types of things we need to anticipate as we start evaluating products, when we evaluate products, and particularly when there are different types of surgical techniques involved.

So the fundamental challenge, if we step back, is how we facilitate device development and device innovation and get good products to market quickly and safely when science is continually changing and moving at a very rapid pace. Devices are getting more complex. And the pathways we use to evaluate these products in this country were established 35 years ago or more. And so fundamentally, we've been thinking about what we can do to modernize the way we evaluate products and take advantage of the new technologies and new methodologies that are available to us.

Well, there are an enormous number of exciting, emerging technologies that challenge us. Among them are some listed here, robotics, miniaturized devices, different use environments as patients move from hospitals out into the community and home, really a large array, wireless systems, different ways of communicating. And so our challenge really is to be prepared for these technologies when they hit our door because if they're arriving on our desk, and we start thinking about them then, it's really too late for us to be efficient.

Healthcare delivery has also really revolutionized the care of patients, and obviously, in the interest of cost savings, there's a big push to move patients out of hospitals quicker, so surgical patients are being discharged more quickly. They're going home. There are new monitoring techniques, so that patients are being monitored wirelessly. There are mobile communication devices. We have recently issued guidance on mobile apps for smartphones. It's one of the few markets that's doing well in these tough economic times, and when you look at some of the estimates for what's going to be going on in just a few years, there are going to be 500 million smartphone users with some sort of healthcare application.

Now, we're not regulating all healthcare applications. We don't want to. We don't think we need to. We have focused on a very, very small section of these health apps, if you will, that we've called mobile medical apps: apps that turn a smartphone into a medical device, such as an app with a sensor that turns the phone into an ultrasound machine or an EKG machine, or apps that turn devices into accessories to medical devices.

And here is an example of one of the mobile medical apps that we've approved. This is for viewing radiology images on a smart phone rather than at a workstation in a hospital. And you could say well, why does FDA need to get involved in something like this? Well, if some physician is on the golf course on the 18th tee, and they're looking at your CAT scan, you want to make sure they can see that small tumor in your lung. And so what we had asked the company to do in this case, was to make sure that the contrast imaging on the device was sufficient to see the types of images and types of abnormalities that a physician and a radiologist would need to see.

The other evolving technology relates to just how these devices communicate. There have certainly been some out there for a while that can wirelessly communicate, but there are really exciting and incredible monitoring devices, whether they're implantable devices that measure some physiologic output or devices at home that can automatically notify physicians of abnormalities in their patients. There's really an exciting array of these devices. But it also really makes us think, in ways that we haven't had to think before, about how we ensure the integrity of the signals being transmitted, particularly for life-changing or life-altering signals that a physician might need to receive very reliably.

So, on top of all these technologies that are evolving is the medical device ecosystem, and we talked about differences between drugs and devices. Well, there's a huge difference between what the medical device industry looks like compared to the drug industry, and this is data that's a couple years old, but it's really not any different today. About 70 percent of device companies have fewer than ten employees. I'll say that again; 70 percent of device companies have fewer than ten employees. There is an emerging, innovative group of entrepreneurs and really smart, thoughtful people who are out there trying to develop medical devices. And this is the food chain, if you will, for many of the larger companies. It's not to say that large companies don't innovate and have really great ideas, too. But many of these devices are the ones that percolate up and have really good ideas and ultimately reach patients. And so as we think about how can FDA and other regulators evaluate these devices in a timely fashion, the very viability of these companies depends on us making smart, rapid decisions so that the technologies can be evaluated quickly.

So, when we started thinking about what can we do to help foster this innovation, we identified three main areas that we thought we could help and that needed help. One was to strengthen the research infrastructure and help promote high-quality, thoughtful, good, useful regulatory science. And I'll talk a little bit more in a minute about that and what that means. We also talked about figuring out ways to facilitate the development of novel medical innovative devices. And then we felt we needed to better prepare for transformative technologies and be ready for the technologies when they hit our door, as I mentioned, rather than seeing them for the first time when some submission for a device comes through our walls.

So, one of the ways we conceived of for facilitating the evaluation of medical products was what we've called the innovation pathway. And on the left is the more standard pathway, which is companies come in for an IDE, which is an Investigational Device Exemption. We encourage them to come in during a pre-IDE meeting where we can have an exchange of ideas. They tell us what they're thinking about doing. We give them some feedback. They develop a research protocol, go on their way, conduct their research, and come in with their regulatory submission, which is when we dive in and really start spending time evaluating their data. As we thought about the challenges in that model, sometimes we end up with clinical trials that are conducted with perhaps questionable endpoints, or maybe we didn't establish what the community really thought was a viable and important endpoint for a study. Maybe we didn't consider some of the science or technical issues well enough.

And so as we thought about how could we better invest our resources to shorten that timeline, we feel that an earlier investment in FDA resources may be able to shorten the timeline for both the company and for us. Now, the challenge is that tends to be resource intensive because some of those aren't going to make it as far as a regulatory submission. But we decided to start a pilot program where we would try this out on a couple of devices to see if the model worked. And we identified particularly transformative devices as the area where we were going to intervene.

This is the first product that we accepted into the innovation pathway. It's a brain-controlled prosthetic arm so we -- obviously there's a lot of war veterans who are injured or stroke victims or amputees from traumatic injuries, and they deal with prosthetic arms. This device has an implantable sensor that goes on the surface of the brain. All the patient has to do is think. They think about moving their arm. The sensor measures that brain activity, sends a signal to the arm, and the arm moves. So, there's nothing other than the patient thinking that controls the arm. We obviously view this as an incredibly innovative product and also creating some challenges both for scientific evaluation and regulatory review, and so this was the first product we accepted into the program.

One of the criticisms we've received for the innovation pathway is that it's small. There's only a couple products in it. Is it really going to make a difference? We see 4,000 device submissions a year. And I think the answer is we do think it will make a difference. We don't think it makes sense to revolutionize the way we evaluate products and reallocate all of our resources into a program without first demonstrating the benefit of the program. So, we view the pilot as a way to try out some of these ideas, to identify some mechanisms by which we can improve our other regulatory pathway, such as our PMA process, which is for the higher-risk devices. And you can anticipate us applying some of these lessons in the coming year or so as we learn more about the benefits and maybe areas where it's not so useful.

I've talked a little bit about the device development program and the way it's different than drugs. In the early stage of device development, after someone has an idea and while they're starting to think about developing and changing their device around, one of the key factors is the ability to make iterative changes to the device, and really to make iterative changes to a new surgical procedure as well, and to try to marry those two together. I think historically, FDA has been very regimented in the way we've looked at these investigations. We've had the company go out and implant their device or use their device, come back and report to us. They'll make a modification. They'll have to wait for us to approve it. It's just been very stepwise and, quite frankly, somewhat slow.

There's been a big push for companies to go over to Europe or outside the U.S. to conduct their early trials for a variety of reasons. Some have pointed to the FDA's regulatory barriers, and that certainly may be part of it. It's not the only story, though; it's also a lot less expensive to do clinical trials in Europe than it is in the U.S. There are other barriers: IRBs tend to be tougher in the U.S. than they are in Europe, and some academic centers tend to negotiate contracts with companies a lot more vigorously than they might in other places. So it's not all on us, and we can't fix the entire ecosystem.

But we do feel like there are things we can do to help. And so last month, or actually now I guess in October, two months ago, we issued a first-in-human early feasibility guidance, which was designed to allow companies a little more latitude in how they explore their devices in clinical trials early on. Now, that doesn't mean that we're not looking out for the safety of patients. It doesn't mean that we're not assessing risks and benefits. But we're asking companies: if you know you're going to want to iterate your device, tell us what you're anticipating doing. Once we've come to an agreement about what the test protocol will look like and what the test results should be for those changes, then you can go about your way within those four walls, if you will, of what we've discussed and decided is appropriate for that device evaluation. It's a way to try to allow companies a little more latitude and quicker evaluation of their products early in device development.

Another focus for us has been on regulatory science. I mentioned that earlier. It's been a big push for the Agency as a whole, and certainly in the device world, we think there's a lot of opportunity here. And the definition of regulatory science is the science of developing new tools, standards, and approaches to assess the safety, efficacy, quality, and performance of FDA-regulated products. It sounds -- you know, the point is it's science that is directly applicable to the evaluation of the products we regulate.

There are different types of this science. It could be researching how new devices interact with the body, material science, if you will. It can be developing test methods, whether they're bench top test methods or clinical test methods for evaluating devices, testing products for failure mechanisms, or developing epidemiologic methods to help conduct postmarket studies, as an example. All of this comes under how we evaluate these products better and more quickly.

The classic paradigm that we've used for a long time for evaluating products relies on bench, animal, and clinical testing. And as we've thought about how we can look forward and ensure safety while being quicker and faster at what we do, and also help product development move faster, we think a fourth pillar of this evaluation should include computational modeling in many cases. And we've really been actively thinking and working on how we can leverage the technology that's available now, use resources such as imaging libraries, and create resources that industry as a whole could use to try to improve and speed up product development. We already have approved some products based on computational modeling. For example, we approved an MRI-compatible pacemaker based in part on computational modeling of the MRI field to evaluate safety, which spared patients from having to be exposed to the MRI field before the safety was understood and allowed us to be comfortable with the product's performance.

We've been thinking about pilot programs for the computational modeling. As an example, you could imagine a program that would look at aortic aneurysms and create a library of images of normal patients with their anatomic variation, as well as diseased aortas and aneurysms and what they look like, so that a company that wanted to develop a stent doesn't necessarily have to do a huge clinical trial to understand the variation in these anatomies; they can look in a library and develop their device based on what we already know. And this type of precompetitive space, if you will, would allow any company that wanted to develop an aortic aneurysm stent to use this type of information. It's not necessarily going to obviate the need for a clinical trial in every case, but it can make trials smaller and better prepare the company for the type of device they need to build.
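
To make the precompetitive library idea concrete, here is a minimal, hypothetical sketch of how a stent developer might screen design requirements against a shared library of aneurysm anatomies. The library entries, the field names, and the thin-walled Laplace approximation for wall stress are illustrative assumptions only; they are not FDA's actual modeling approach or data.

```python
# Illustrative sketch only: screening a hypothetical anatomy library with the
# thin-walled Laplace approximation (stress = pressure * radius / (2 * thickness)).
# The library entries and values are made-up examples, not FDA data.

from dataclasses import dataclass

MMHG_TO_PA = 133.322  # unit conversion: mmHg to pascals


@dataclass
class AorticAnatomy:
    patient_id: str
    max_diameter_mm: float    # maximum aneurysm diameter
    wall_thickness_mm: float  # estimated wall thickness
    systolic_mmhg: float      # systolic blood pressure


def wall_stress_pa(anatomy: AorticAnatomy) -> float:
    """Approximate peak circumferential wall stress (Pa) via Laplace's law."""
    pressure_pa = anatomy.systolic_mmhg * MMHG_TO_PA
    radius_m = (anatomy.max_diameter_mm / 2) / 1000.0
    thickness_m = anatomy.wall_thickness_mm / 1000.0
    return pressure_pa * radius_m / (2 * thickness_m)


# Hypothetical library entries spanning anatomic variation.
library = [
    AorticAnatomy("A001", max_diameter_mm=45, wall_thickness_mm=1.9, systolic_mmhg=130),
    AorticAnatomy("A002", max_diameter_mm=62, wall_thickness_mm=1.5, systolic_mmhg=150),
    AorticAnatomy("A003", max_diameter_mm=55, wall_thickness_mm=1.7, systolic_mmhg=140),
]

for anatomy in library:
    stress_kpa = wall_stress_pa(anatomy) / 1000.0
    print(f"{anatomy.patient_id}: approx. wall stress {stress_kpa:.0f} kPa")
```

In practice, such modeling would involve patient-specific finite element or fluid dynamics simulation rather than a closed-form estimate; the point of the sketch is only that a shared library lets every developer test designs against the same range of anatomic variation.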

We've identified several areas that we're going to focus on first and have been establishing public-private partnerships with industry and academia to focus on developing some of this precompetitive computational modeling space, if you will. And there's several areas where we have highlighted where we're going to focus first. So, one is the virtual human heart, so it would be models of the heart, computational modeling of the heart, of the valves, of the muscle, of the arteries, peripheral vasculature, so that companies that are developing treatments for narrowings and aneurysms would have access to important information; the model mind, so companies could develop neurosurgical tools to treat stroke; and then the one that's not up there is the bony body, the orthopedic and joints, so that companies who are developing implants and treatments for the orthopedic environment would have access to these things. We recognize this isn't going to be an overnight success. It's going to take time and effort, but we do believe in a relatively short-term, measured probably in, you know, a couple of years, rather than decades, we'll be able to build up the infrastructure that can help companies develop their products.

Finally, you know, the other big-ticket item for us is personalized medicine, and I think that really fits in well with this meeting and this group because, you know, as a former proceduralist, you know, every patient is a little bit different; every surgeon does their techniques a little bit differently; and we really want to move into the world where we're delivering the right therapy to the right patient at the right time. And when you think about the way we treat patients now, we give a lot of patients a therapy hoping that some will benefit. Here are some examples from a variety of drugs, and clearly not every patient benefits, but we treat a whole lot. And on the other hand, we have adverse events where we treat a lot of patients, and some patients may be at higher risk for adverse events than others. So, we'd really like to individualize the benefit/risk determination as we think about what devices and treatments patients should be getting.

Well, that brings us to what do we do once that device gets on the market or that patient is treated, and really the world of postmarket surveillance is really at a critical time and I think fundamentally changing because of the advances in technology. So, clearly it's more than just the industry and FDA or regulators doing postmarket surveillance. It really relies on hospitals and patients and physicians. Here's an example of the challenge of postmarket surveillance and why we really need to think about it carefully, not after the products are on the market, not after we see a problem, but think while the product is reaching the market about what we're going to want to know and how we're going to get that information. This is drug-eluting stent data. We approved the CYPHER stent in 2003 and TAXUS in 2004. And within about 18 months or several hundred thousand procedures, the percentage of stents going in that were drug-eluting went from 20 percent to 80 percent in that short time period. And we all know about the concern about late stent thrombosis that came up. But really it just speaks to the challenge of being prepared for the anticipated problems. It doesn't mean we need a huge postmarket study for every product, but we do need to think about what infrastructure we want in place for postmarket monitoring, so that we can be prepared to answer the questions when they come up, and we can anticipate the questions we're going to want to know.

Well, the game changer is going to be unique device identifiers, which should be coming soon to a hospital near you. This will allow individual devices to be included and incorporated into patients' medical records. It's essentially a barcoding system, so that a medical record will include not only the patient's device and its model number; it will be searchable down to the lot and the manufacturing facility. It really will provide incredible information. And if this type of information gets incorporated into electronic health records, and we suddenly have ready-made databases that include patient outcomes and device information, it will allow us really great opportunities for assessing device performance without necessarily needing to create individual infrastructures in different places. The need for device registries or procedure registries is never going to completely go away. But this will certainly create a really important new tool. And if we think about what the device world will look like, I think we envision a UDI-centric world in the near future, we hope, where that identifier is really going to be what connects the performance of that device to hospitals, to patients, to regulators, to industry, to insurers, and I think it will really offer some incredible opportunities for us to evaluate products.
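
As a rough illustration of what a UDI-centric record could enable, the sketch below joins a hypothetical device identifier table (model, lot, manufacturing facility, as described above) to outcome entries in an electronic health record. All field names and values are invented for illustration; they are not the actual UDI data standard, which had not yet been finalized at the time of this talk.

```python
# Illustrative sketch only: linking a hypothetical unique device identifier (UDI)
# to electronic-health-record outcomes. Field names and values are invented;
# they do not reflect the real UDI format.

devices = {
    # UDI -> device description, searchable down to lot and manufacturing facility
    "UDI-0001": {"model": "HipStem-X2", "lot": "L4521", "facility": "Plant A"},
    "UDI-0002": {"model": "HipStem-X2", "lot": "L4522", "facility": "Plant B"},
}

ehr_records = [
    # Each EHR entry carries the implanted device's UDI plus an outcome flag.
    {"patient": "P-101", "udi": "UDI-0001", "revised_within_2y": False},
    {"patient": "P-102", "udi": "UDI-0001", "revised_within_2y": True},
    {"patient": "P-103", "udi": "UDI-0002", "revised_within_2y": False},
]


def revision_rate_by_lot(records, device_table):
    """Aggregate a simple outcome (2-year revision) by manufacturing lot."""
    counts = {}
    for rec in records:
        lot = device_table[rec["udi"]]["lot"]
        total, revised = counts.get(lot, (0, 0))
        counts[lot] = (total + 1, revised + int(rec["revised_within_2y"]))
    return {lot: revised / total for lot, (total, revised) in counts.items()}


print(revision_rate_by_lot(ehr_records, devices))
# e.g. {'L4521': 0.5, 'L4522': 0.0}
```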

Just in conclusion, I think one of the really important philosophies we have always subscribed to, and certainly there's been great emphasis on now, is our need to look outside the walls, and I think this meeting is an example of that. We have a lot of really smart people, sometimes people who know more about individual devices than anyone in the world because we get to see the whole array and everyone's different product. But we recognize that we need to also engage with the experts outside our walls, and we have certain mechanisms that we can do that with. We have advisory panels and special government employees that we can tap into. But we recently announced our Network of Experts Program, which is an acknowledgement that we really want to be engaging with the clinical community. We're piloting this Network of Experts with scientific and medical professional societies that will allow us to quickly get access to the expertise we may need if we don't have it or expertise just to bounce ideas and thoughts and get the view of what's going on out in the community. So, that's certainly something that's very important to us.

And collaborations, as I've mentioned, such as public-private partnerships -- we can't do this alone. We need collaboration with other government entities, with other regulators, with industry, and with the academic community. If you look back over the last year, we've created a number of these public-private partnerships and agreements, and we certainly plan to continue in that direction.

So, bottom line, I don't think I have to tell you that we have a real challenge in front of us. We have really exciting, rapidly evolving technologies. Devices are getting more complex. Regulation of devices is global. The idea that we have these silos of device uses is really a view of the past. We need people not just within our organization, but other organizations to work together, and we're excited to have you here today to talk about new methodologies and ways of evaluating these products. So, thanks for your attention.

(Applause.)

DR. RITCHEY: Thank you, Dr. Maisel.

DR. MAISEL: I don't know if you wanted to leave a few minutes for questions or --

DR. RITCHEY: Yes.

DR. McCULLOCH: Yeah. Thank you very much for that fascinating and informative talk. I thought I'd butt in straightaway because there's something you left out. The very disparate group we have includes a lot of people like me from outside the U.S. and outside regulation as a profession who don't actually understand fully the context of this meeting and the implications of the Institute of Medicine's report on 510(k) and the FDA's response to that. I think it would be really, really helpful if you could just give people a couple of minutes of history and explanation about why the FDA has decided the change is needed.

DR. MAISEL: Well, thanks for the opportunity. I think the story really starts about two years ago when there became general dissatisfaction with the U.S. medical device regulatory program, and we were hearing really three different things. From industry, we heard that FDA was inconsistent in their decision making, that our requirements and criteria were not transparent, that we were slow in evaluating products, and delaying good products from getting to market. On the other hand, we heard from patients, from some physician groups, from third-party payers, that we were approving or clearing products without enough evidence, that there wasn't evidence that the products did what they claimed they did, that they were effective at what they did, or that there were safety concerns. And then from our own employees, we were hearing that they were really challenged by a 35-year-old regulatory system with increasing workload, increasingly complex devices, and that it was getting harder and harder for them to do their job well and in a timely fashion.

And so in that context, we undertook a two-pronged approach for evaluating the program. Number one is we did our internal evaluation, and we issued two reports in August of 2010, so a little over a year ago, that contained 55 recommendations that our own staff came up with. And the second thing we did was we asked the Institute of Medicine to conduct an independent evaluation. From our own evaluation, we decided in January of this year to implement 25 specific actions that addressed 47 of the 55 recommendations, so almost all the recommendations our staff came up with. After we vetted them publicly and had a number of town hall meetings and public meetings to engage with industry and patients and providers, we identified 25 things. And those things related to providing additional guidance on our programs, to establishing some of the programs I talked about, better outreach to experts to get insight into how we do our business.

The Institute of Medicine had a totally independent process from FDA. They had three public meetings. They issued their report in July of this year, and the report contained eight recommendations, many of which actually very well aligned with the things we were already doing. So, for example, they said CDRH should have a Quality Assurance Program. Now, we had always had a small Quality Assurance Program, but we had identified that as an issue ourselves. We formed a Quality Assurance Subcommittee of a Center Science Council that we've established, and we're implementing and developing a broader Quality Assurance Program.

They pointed to the need for greater coordination of postmarket surveillance. That's certainly something we recognize. They talked about UDI. We have a number of other postmarket activities that Tom Gross and others will talk about later during this meeting.

But the fundamental, number one recommendation that they made that's gotten the most attention is that they recommended elimination of the 510(k) Program. The 510(k) Program is the program by which we review over 90 percent of medical devices that require our premarket evaluation. It relies on a company showing that their device is "substantially equivalent to a device that's already on the market." So, IOM criticized that process as being fundamentally flawed.

Our response has been that we don't think the 510(k) Program should be eliminated. The 510(k) Program we think in some cases makes perfect sense. It makes perfect sense sometimes to take device B and compare it to device A and show that scientifically the devices are the same and function the same way. So, I think our feeling is that there is a role for the 510(k) Program, but we've also expressed our willingness to improve the program and to identify areas and ways that we can make it stronger.

So going forward, I think as I said, we've implemented a number of these actions that we think will help clarify and strengthen the program, and we remain open to hearing about other ideas.

DR. HENEGHAN: Thanks, Bill. Great talk. There are a number of points I want to come back to. But you've just highlighted one of the major problems of the 510(k) and the equivalence route: as a manufacturer, what's to stop me just sitting outside your whole process, waiting to see which devices you back and which go to the market, and then jumping in with an equivalent? That's what's happening worldwide, and that's where the major problems are coming from. Take the rigorous evaluation of, say, your new arm: if that goes to market and is a profit-making device, industry around the world will just jump on top of you with equivalents and go straight back to your 510(k), and that's exactly what we do in Europe with the equivalence evidence requirements. And that's the major problem.

DR. MAISEL: Well, I would say it's a problem and it's a strength. I mean, I think the strength of the program is that iterative aspect -- and we talked about how devices iterate very quickly. The strength of the program is that new, incremental, iterative changes can allow novel technologies to reach patients relatively quickly. On the other hand, we have acknowledged there are times when the 510(k) Program is not the perfect program for evaluating products. We certainly have examples of some of those types of devices. So, I do think there are pros and cons to the system, and what we'd like to do is strengthen the program so that we're more consistent in applying it and making good decisions, and minimize the types of devices that get out there that maybe don't perform as well as we'd like.

DR. GROVES: I guess obviously, the problem from patients' point of view and clinicians and indeed payers of healthcare is that if you're demonstrating equivalence to something which has an inadequate evidence base in the first place in terms of effectiveness and safety, then you're just making more of the same bad stuff.

DR. MAISEL: I think that's one of the fundamental criticisms of the program. I think one of the -- for those of you who have read the IOM report, I think one of the subtleties and maybe different perspectives we have is there is some history to the classification of these products, and it's not the substantial equivalence determination in isolation that is intended to assure safety and effectiveness, but there are also what are called general and special controls which are in place. So, for example, we could have a special control that requires a certain type of performance testing for a device to make sure that it meets certain standards as an example. But I mean you're highlighting some of the fundamental concerns that have been raised about the program.

I will say there certainly are anecdotes, and maybe more than anecdotes, of devices that haven't performed well. When we look at the safety of the devices that have come through the program using some marker such as recalls or patient injuries, there's a low rate of recall for these devices. Now, we recognize that that's not necessarily the ideal marker for the safety of a device. But there are 120,000 products that have come through the program, and our feeling is that the vast majority perform well. There certainly are some devices that don't. But most of them do.

DR. HENEGHAN: Can I come back in, Bill? I'm going to go to the think tank discussion while you're up, if you don't mind.

When you read the IOM report and all of these reports, there's a particular issue: we continue to intertwine safety and effectiveness. That's a problem, because effectiveness, by its very definition, means the benefits outweigh the harms, so an effective device is safe. But safety, as you've just shown with the mobile app example, means a device is safe to go on the market; it may not be clinically effective, because it may not have any clear, proven benefits. The example earlier about the defibrillators is one where you'd want effectiveness. And when you read these reports, there's a continual intertwining of safety and effectiveness without teasing them out, and I do think there's a need to think about them differently: first, which early devices are safe, and then, of the safe devices, which are effective. The problem at the moment is that most of industry would use the safety information to say the device is effective and then use that to take it to market, and that's where the speed comes from; that's where all the problems come from. So thinking of the two differently may help us in our recommendations to industry.

DR. MAISEL: I don't -- I guess I don't view them as so distinct. I mean, we certainly define safety and we define effectiveness, and we have conversations about is a device safe and is a device effective. In the end, we need to take the information about both of those things and make a benefit/risk determination. And I don't think there's a black-and-white answer to whether a device is safe. Let me give you an example. If someone puts a stent into a cerebral artery, and two percent of the time, when you put that stent in, the artery bursts and the patient dies, is that a safe device? I can't tell you, because I don't know what it's being used to treat. Maybe the patient dies 100 percent of the time if I don't put the stent in.

So, I don't think we can talk in strict isolation about safety. I think they're intertwined in a way that we can't separate. I agree completely with you that we need to define safety for a device. We need to define effectiveness. But we can't get away from that benefit/risk balance.

DR. HENEGHAN: What do other people think about that point?

DR. SEDRAKYAN: I can add another example to what Bill said, from orthopedics, and Steve Graves is here: revision surgery. A revision can occur after a particular implant is used because of a safety concern, but revision is also an effectiveness measure, because if revision occurs ten years after the initial surgery, then the device has basically performed successfully over its life, and freedom from revision for ten years can be an effectiveness measure. But if it breaks down within two years, it suddenly becomes a safety measure as well.

DR. BRUCE CAMPBELL: May I comment on that? Because I mean this is integral to our guidance for the U.K. on procedures, quite apart from medical devices, of which we now have about 400. And this whole balance of safety and efficacy is absolutely fundamental. There is no way that one can comment, as has just been said, on whether something is safe until you know, number one, the context of the efficacy and, number two, the context of the condition that it's being used to treat. That is absolutely fundamental, and it does make it difficult. And the other thing that makes it difficult is the point that was made further down the table a little while ago, which is about the business of saying that something is safe and efficacious compared with placebo rather than compared with whatever current management is. That complicates the issue still further. But I don't think you can separate them just because it's complex.

DR. GREG CAMPBELL: Yeah. And the thing that I would like to follow up with is the fact that FDA issued this benefit/risk guidance document draft in August, and when we ask our advisory panels, we ask them usually a question about safety; we ask them a question about effectiveness; we ask them a question about the benefit/risk. And so I think it's a matter of understanding within the context of, you know, how is the device to be used? What is its intended use? You know, what is the safety profile? What is the effectiveness profile? And then figure out if there's a balance in terms of risk and benefit.

DR. BRUCE CAMPBELL: If I might make just one other point briefly, one thing that's not been mentioned very much and is complex is patient input to all of this which, you know, we try our best to do. And my concept of that from the start -- when we started trying to do this about 5 or 6 years ago, was that what I actually wanted to know from patients was their concept of benefit against risk, not just their understanding of risk, which has been written about a lot, but their concept of benefit against risk. And the trouble is the methodology for that is very elusive.

DR. RITCHEY: So, I think there's a lot of comments left on this, and we have a think tank in a few minutes, and I think this would be a great thing to continue then. Are there any other questions for Dr. Maisel?

(No response.)

DR. RITCHEY: Thank you so much. Next is Dr. Sedrakyan's discussion of the conceptual framework for evidence evaluation for devices and procedures.

DR. SEDRAKYAN: I'll try to be brief. This is just a summary of the conceptual framework that we developed in collaboration with FDA, really from a researcher's perspective. Because I don't have any slides, if you look in your binder, there's a paper there, published in Medical Care, titled "Framework for Evidence Evaluation and Methodological Issues in Implantable Device Studies." I'd like to refer you to the second page, and I will just briefly talk about the complex figure there.

This is just for the purpose of the discussion, putting it out there since a number of frameworks are going to be offered for discussion. We took the perspective of a researcher: if you're a researcher about to evaluate a particular technology, what are the issues, in a clinical trial context and in an observational study context, that you need to pay attention to? It almost serves as a summary, so that you have a predetermined plan in advance of initiating any evaluation, whether that's a systematic review and appraisal of evidence, a randomized clinical trial, or an observational study. What is the variety of factors you need to pay attention to?

On this graph, we also tried to use the arrows, and the thickness of the arrows, to indicate the strength of the associations. What we're advocating with this figure is really that a researcher think about a particular context and about the framework, almost like a paradigm, and then lay out all the factors they think are important for the evaluation of a particular technology. So, if you're comparing device A to device B, you need to think about the interventionist and the particular surgical setting. One might need to think about surgical volume issues, preferences, training, even the gender and age of the surgeon; there are studies about that. Think about the completeness and success of the surgery and the other technologies that are used during the surgical procedure.

Then, think also about hospital characteristics. In certain situations, hospitals have quality systems in place that will make a particular technology look better than it will look in average, community-based practice. So, consider any factors that are related to hospital volume, ethical standards in a particular hospital, whether it is a teaching hospital or a community-based hospital, and whether it offers advanced critical care, because the setting in which you're doing the study has implications for the success of a procedure.

Then, we've already talked about the device factors: iterations of a device technology change over time, of course, and you have to consider whether you can capture the entire spectrum of a device technology when you are doing a particular study. Then, you also think about patient characteristics and define those variables that researchers think are important to be part of that particular study and investigation.

The evaluation of trial quality certainly should be part of it if it's a systematic review, but also think about how a device differs from drugs, and Bill alluded to the differences between drugs and devices. We're often dealing with very specific situations in which we can't do blinding or allocation concealment. So thinking about quality should also be part of the investigation.

Now, all this clinical trial evidence, which most of the time is generated premarket, is going to have an impact on the use of the technology in the real-world setting. It has an influence on on-label versus off-label use in the community-based setting. So, if you're doing an observational study, you would know that the trials done before might have had a specific indication, and when the technology moves into clinical practice, you will be dealing with a totally different population that might be receiving this device. So you have to revisit which patient characteristics are important.

An additional factor that we thought about is access to particular care and media-based factors. Why can that be important? Here's an example that I wanted to talk about: metal-on-metal hip implants. There is a concern that metal-on-metal hip implants are associated with a higher rate of revision compared to conventional, metal-on-polyethylene hip implants, and there's a lot of attention from the media to this question. Now, patients have a raised awareness of this issue, and fear. They're going to go for additional checks, and this is potentially the media influence: they're going to get additional investigation; they're going to get the metal ion level in their blood measured for metallosis; and some of them are going to be concerned about it. And it might lead to revisions that might not have occurred otherwise.

So, all these factors are part of the investigation. This is the model that we're advocating for researchers to think about as they approach a clinical investigation. That's maybe five minutes about it, and we can move on to the IDEAL framework -- any questions about that? The paper also goes into a detailed discussion of the factors that are relevant in the clinical trial setting and the observational study setting, but this is mostly the framework that we wrote about.
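
For readers following the transcript without the figure, the domains Dr. Sedrakyan describes can be summarized as a pre-specified checklist. The sketch below is one hypothetical way a researcher might record such a plan before starting an evaluation; the structure and wording are illustrative and are not taken verbatim from the published paper.

```python
# Illustrative checklist only: a pre-specified evaluation plan capturing the
# factor domains described above. Entries paraphrase the talk; none are
# prescribed by the published framework in this exact form.

evaluation_plan = {
    "comparison": "device A vs. device B",
    "interventionist_and_setting": ["surgical volume", "preferences", "training",
                                    "surgeon age and gender",
                                    "completeness and success of surgery",
                                    "co-used technologies"],
    "hospital_factors": ["hospital volume", "quality systems", "ethical standards",
                         "teaching vs. community setting", "advanced critical care"],
    "device_factors": ["iterations over time", "spectrum of versions captured"],
    "patient_factors": ["pre-specified clinical variables",
                        "on-label vs. off-label population"],
    "study_quality": ["blinding feasible?", "allocation concealment feasible?"],
    "context_factors": ["access to care", "media attention"],
}

for domain, items in evaluation_plan.items():
    listed = ", ".join(items) if isinstance(items, list) else items
    print(f"{domain}: {listed}")
```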

DR. CLIFFORD: Art, I was just wondering -- I mean, I really like what you've put forward here in terms of getting it all in a picture. I'm just wondering, from the way I've seen things occur with medical devices, I see this as more of a roundabout circle, in that we're not necessarily starting with the RCTs that feed into the real-world observational studies. We're lucky if we get the RCTs. So, I'm just wondering about the linearity of the presentation and whether you had given any thought to that -- I mean, it's going to get more complicated, so I can appreciate maybe you trying to simplify it for this point in time.

DR. SEDRAKYAN: Certainly. I mean, the limitation of this framework is that it doesn't cover the entire cycle. And that's why, when we connected with the IDEAL group, we realized that we were missing the dynamics of this process, because IDEAL has a surgical perspective on investigation. We had a researcher's perspective, approaching a particular question: okay, what do I do; how do I investigate this particular technology? So yes, we're missing the dynamics. It doesn't start with an RCT. It starts with learning. It starts with applying. It starts with a feasibility trial, then early majority, then a large clinical trial, and then an observational study for postmarket surveillance. That's not easy to capture in this graph, certainly.

DR. MARINAC-DABIC: All right. Are there any issues with the microphone, or is it working?

DR. McCULLOCH: None that are important.

DR. MARINAC-DABIC: No.

DR. McCULLOCH: Thank you very much, Danica.

(Off microphone.) -- quite big on apologies; if you knock us down on the sidewalk, the first thing we'll say when we get up is sorry. So, I'm going to start with a couple of apologies. First of all, for confining this great group to listening so much this morning. I think that Carl's brief -- it focuses very much on the power of group interaction -- so we might have to put up with -- me. I was very pleased that Bill was kind enough to explain the FDA point of view because I knew nothing about this --

So I'm going to do the reverse now and give those who haven't heard it already a fast course in IDEAL and why it's relevant to today's discussions.

So, of course, it's IDEAL. It's a new paradigm for the scientific evaluation of interventional treatments. The words are in red because they may require definitions. Paradigm is one of those words we don't use in everyday conversation, and we think we know what it means. I take it here to mean a framework for thinking, or a way of looking at things, and interventional therapy I'll define in a minute. This paradigm --

UNIDENTIFIED SPEAKER: Bill, I'm going to see -- there's a problem with the -- microphone.

DR. McCULLOCH: -- there is a framework which describes the way in which innovation actually happens in these areas, which is different from --

UNIDENTIFIED SPEAKER: -- it's just -- now you can?

UNIDENTIFIED SPEAKER: I don't know.

DR. McCULLOCH: -- The second part, a series of recommendations, flows logically and directly from that framework, because once the natural history had been mapped out, you can see the stages it went through, the problems that there were at each stage, and therefore, the logic behind what the study design and reporting guidelines should be --

I'm not really sure. Is that better?

(Indicates yes.)

DR. McCULLOCH: Okay. So I can probably speak a bit lower now.

And the third part was a series of proposals, because one thing that came out of their discussions, and I'll go into the history in a minute, was the obvious conclusion that one of the reasons we had a problem with the evaluation of surgery and similar techniques was that the environment for doing science in these areas was pretty weak, and the proposals are around getting those who have the power to change that environment to consider doing so.

So, a bit of history now. This goes back 15 years, and at the start of the evidence-based medicine movement, surgeons came in for a lot of criticism, and in particular there was this scathing editorial in The Lancet by Richard Horton likening us to this comic figure of a bombastic surgeon from films in Britain in the 1960s, Sir Lancelot Spratt, because of our poor quality of methodology and our lack of randomized trials. We couldn't really argue with the facts about that, but there was a fairly vigorous response from a lot of surgeons, including me, trying to qualify this and point out the difficulties we had in doing good trials. And this was a controversy which, unusually for surgical arguments, eventually generated more light than heat because what happened was that the real difficulties began to emerge from this debate, and it also became clear that we weren't alone, that we in fact were in good company with a lot of other groups.

I call this the interventional therapy syndrome. There's a bunch of therapies which are affected by this syndrome, and they have things in common. They require significant therapist skill and training, they cater to the individual patient, and they act by physical or psychological effects in the patient. You can see we're getting close to FDA definitions here. And this, of course, includes surgery but also all types of interventional quasi-surgical techniques which various medical specialties have started to do over the last few decades and things like physiotherapy, radiotherapy, psychotherapy. What I didn't appreciate until meeting with Art and Danica was that there were actually strong resonances with the problem with devices as well.

And the syndrome features which were identified by this debate were fairly straightforward. All of these techniques, when they're new, require an iterative phase of rapid modification, which we've talked about. They suffer from a problem of definition because they're all tailored therapies that depend on skill, so it's hard to try and draw a circle around them and say this is exactly what this is.

I'm sorry, Anna. I forgot you wanted to see me. There I am.

So, the learning curves are another problem because, as we've discussed, these can interfere with proper evaluation if a trial is started whilst everybody's learning. And quality control becomes a big issue.

Finally, and probably most importantly, a lot of these therapies generate very strong preferences, particularly amongst the therapists -- surgeons, of course, are a classic example of this -- but also and secondarily, amongst the patients. And one of the things that makes it difficult for patients is what this was referring to, that you often have very asymmetric choices to make between a major operation which is life-threatening and taking a pill or undergoing a procedure which looks much less threatening, but might be less effective.

So, 2007 to 2009, my then head of department, Professor Jeffrey Meekins (ph.), convened a series of meetings at Balliol College in Oxford, which were very productive in working out the features of this debate. And this is where the IDEAL framework came from, and you can see where the acronym comes from, because it's the first letters of the stages that we defined in the paradigm for innovation in these therapies. So, we deliberately designed this so that you could see the similarities to the drug development Stage 1 to 4 pathway, but also the differences.

And, essentially, there's a different question to be asked at each of these stages. The idea stage is basically the first-in-man study, and at this stage the only question is, can it be done?

After that, you have the stage of rapid iteration in small numbers of patients, which we've called the development phase, and here the question is, how can it be done?

Once the technique, or in our case today, the device is stabilized, there then becomes a further phase, which we've called exploration, where we begin to develop some ideas about the capacity of the device to do good and incidentally, also harm. And here the question is, is it worth doing?

And then finally, after all of these stages, we reach the stage at which we were being criticized in the first place, which is comparative effectiveness, hopefully through randomized trials. And the question there is, is this better than what we do now?

And then the last question of all is the one that was being referred to earlier, with long-term surveillance: if we look at the long term, are there surprises out there in terms of late effects, rare effects, or effects that appear when you change the indication and start to use the technique for different things?
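
To keep the stages straight in the discussion that follows, here is a compact editorial summary of the framework as just described, expressed as a small lookup table. The stage numbers and the names of the last two stages (Assessment and Long-term study) follow the published IDEAL acronym; the code itself is only an aid to the reader, not part of the talk.

```python
# Editorial summary of the IDEAL stages as described in the talk.
# Stage names and numbering follow the published acronym:
# Idea, Development, Exploration, Assessment, Long-term study.

IDEAL_STAGES = [
    ("1  Idea",            "first-in-man study",                           "Can it be done?"),
    ("2a Development",     "rapid iteration in small numbers of patients", "How can it be done?"),
    ("2b Exploration",     "technique or device has stabilized",           "Is it worth doing?"),
    ("3  Assessment",      "comparative effectiveness, ideally by RCT",    "Is this better than what we do now?"),
    ("4  Long-term study", "surveillance for late, rare, or indication-shift effects",
     "Are there long-term surprises?"),
]

for stage, focus, question in IDEAL_STAGES:
    print(f"{stage:<20} {focus:<60} {question}")
```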

So, as I said, that was the framework. The recommendations flow from the framework: when the group turned its attention to them, the approach was to say, these are the questions we're trying to answer, and this is the context in which we're trying to answer them; so what are the study design recommendations and what are the reporting recommendations for each stage?

I haven't time to go through all of these, you'll be glad to hear, but I just want to give you a few headlines. So, on first-in-man studies, the main suggestion that was put forward was that these should be available to professional colleagues, that there should be a register, and that there should be an obligation to report. On development studies, we sketched out the principles for the types of study design and reporting that should be necessary when you're trying to report these early phase studies where the technique you're looking at keeps changing.

And likewise, for the slightly later studies in the exploration phase, we tried to map out a plan for a kind of integrated approach: collaborating between centers, doing the same thing, reporting the same data, and trying to build consensus, so that we ended up able to do a trial together, often a difficult thing for surgeons to do, as you know. And we also looked, and I think Jonathan Cook's going to speak about this a bit later in the meeting, at the difficulties of randomized trials in new interventional therapies and what can be done to help make trials more viable. Finally, we emphasized a lot the importance of registry data with long-term studies, and I was fascinated to hear the idea, which I can see quite clearly, that unique identifiers plus electronic patient records may make some of the formidable difficulties of formal registers go away. So, those are the recommendations.

I said the third arm of IDEAL was about the proposals. Here we identified the four groups of people [slide reads: 1) Editors 2) Funders 3) Regulators 4) Professional Societies] that we thought had a major influence on the environment for this kind of research and came up with suggestions about how they could make life easier for those trying to produce good-quality evidence. Now, I'm not going to read these all out. I'd just like you to look at the regulator column, since we're at the FDA. We suggested providing rapid, flexible, and expert oversight, very much along the lines that Dr. Maisel was talking about; linking provisional approval to evaluation or registration; accepting our study designs; and raising the burden of proof for full licenses to the efficacy level.

There's a lot more detail there, and I would ask anyone who would like to look into this further to go to the website, which just went live last week, incidentally, but you should find most of what you need to know about it there.

So, I want to talk in the second part of this brief chat about how this relates to the current issues that the FDA faces in regulating devices. IDEAL is a potentially common framework for discussing research in a wide variety of clinical science areas. It does provide a realistic description through the IDEAL framework of the stages that innovation goes through, and it's pretty clear when you look at examples, that devices follow this framework pretty closely.

The recommendations, therefore, point clearly to relevant problems within the FDA's current discussions and potential solutions for those. And because the IDEAL framework stretches from first-in-man all the way out to postmarket surveillance, there's a template, if you like, for each stage in the product life cycle. So, I guess what I'm putting forward is the argument that linking device regulation doctrine to this framework could help interaction with academics, clinicians, and indeed manufacturers in many disciplines.

I'm going to just talk briefly about a couple of the IDEAL recommendations and how they feed into the argument. Here, I'm going to interpose some of my own opinions, and I take full responsibility for those, and they don't belong to anyone else who might wish not to be blamed for them. So, the first thing is the explicit recommendation of declaring when you do a first-in-man study. This of course has the potential to establish primacy and from a commercial point of view in a post-510(k) world, this might turn out to be a valuable thing for industry. I'd be interested to hear what they think. It would, of course, require careful monitoring, but we also made the recommendation that even adverse experiences should be reported, anonymously if necessary, and that turns out to be quite problematic. But I'm not going to talk about that just now. I'll have an opportunity later.

If we look at the premarket world from an FDA point of view in a post-510(k) environment, we've made recommendations for early stage studies before the device design is stable, which we've called prospective development studies, and then an integrated study design where a collaborative group starts with what may well be a non-randomized prospective study and then develops a randomized study together. We have also put forward a proposal, as you've seen, that the bar for full approval should be a demonstration of efficacy. Now, putting these recommendations and this proposal together strongly suggests to me that in the future we're going to end up with a much greater role for what you call coverage with evidence, or provisional approval, than there is at present. That seems to me to be a logical consequence, but it's something I'd really like to hear other people's views on.

Turning to postmarketing surveillance, we've recommended comprehensive disease- or condition-based registries, but also -- more in the fine print, if you like -- recommended statistical process control monitoring. These are quite expensive recommendations, and clearly it's not going to be possible to do this kind of registry for everything, so there are going to have to be difficult decisions made about, first of all, who pays for it, and secondly, whether it's worth it when one balances time and effort against the risk reduction benefit, which I think Bill Maisel was referring to.
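
Statistical process control is mentioned here only in passing, so the following is a minimal, hypothetical sketch of one common approach: a one-sided Bernoulli CUSUM that monitors a registry's early revision rate and signals when the observed rate drifts from an acceptable baseline toward an unacceptable one. The baseline rates, the decision threshold, and the simulated outcome stream are invented for illustration and would need proper calibration against real registry data.

```python
# Illustrative sketch only: a one-sided Bernoulli CUSUM for registry monitoring.
# It signals when evidence accumulates that the early-revision rate has shifted
# from an acceptable baseline p0 toward an unacceptable rate p1.
# p0, p1, and the threshold h are made-up values that would need real calibration.

import math


def bernoulli_cusum(outcomes, p0=0.02, p1=0.04, h=4.0):
    """outcomes: iterable of 0/1 (1 = early revision). Returns index of signal or None."""
    # Log-likelihood-ratio weights for a failure (1) and a success (0).
    w_fail = math.log(p1 / p0)
    w_ok = math.log((1 - p1) / (1 - p0))
    s = 0.0
    for i, y in enumerate(outcomes):
        s = max(0.0, s + (w_fail if y else w_ok))
        if s > h:
            return i  # signal: review this device or lot
    return None


# Hypothetical stream of registry outcomes: mostly no revisions, then a cluster.
stream = [0] * 200 + [1, 0, 1, 1, 0, 1, 1, 1, 0, 1] + [0] * 50
signal_at = bernoulli_cusum(stream)
print("signal at case", signal_at if signal_at is not None else "none")
```

In a real registry, a chart like this would typically be run separately per device or per lot and reset after each signal is investigated.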

So, just finishing up really, the last couple of slides, another implication I see about this is that there's going to be a need for close cooperation in the future with industry, but without financial conflicts of interests, so a kind of arm's-length relationship, which allows proper evaluation of devices by other groups. If you combine that with the need for new thinking and experimentation around some of the proposals that IDEAL had put forward for early phase study design, I think there's a need for a new type of surgical or Interventional Trials Unit, which we've called a SITU. And we've happily just agreed to set one up in Oxford which will open next year and where they're interested in using this to link with some of the FDA initiatives such as the MDEpiNet initiatives that Danica has explained to me.

So, I'll conclude there and just go back to basics with the final slide: IDEAL as a template for looking at regulatory science for devices. The IDEAL recommendations are based solely on the need for the best evidence, for new devices and for a wider field of healthcare science. I think a couple of good aims for our discussions over the next two days should be to determine whether the IDEAL framework and recommendations are a suitable basis for regulatory science, or whether there are some bits of it which we think are wrong and need changing; to study the practical implications of doing that; and finally to propose solutions for the problems this will bring to light.

Thanks very much for your attention.

(Applause.)

DR. RITCHEY: Any questions about the IDEAL framework?

DR. SEDRAKYAN: If there are any questions, can you state your name and affiliation if you still want to mention that?

DR. BARKUN: Is this working? Jeffrey Barkun --

DR. GRAVES: I'm sorry.

DR. BARKUN: I'm sorry.

DR. GRAVES: You first.

DR. BARKUN: Jeffrey Barkun, one of the IDEAL group that Peter just showed the recommendations of. I'm just questioning, based on the agenda that I saw, whether everyone understands -- I understand the IDEAL recommendations that were shown -- but what the IDEAL stages are and what they exactly mean, because it's actually crucial to the concept of safety and efficacy. For example, if you're talking about a technology which is in Stage 0 or Stage 1, safety means one thing, because you've got certain patients and it's being done in a certain context by certain physicians or surgeons with a certain learning curve, whereas when you're in Stage 3, or even 4 for that matter, it's a totally different ballgame because you've changed all those variables.

So, I'm just wondering if it might be worthwhile, Peter, just to show the actual stages, because I don't know what's coming up, and in particular to link the concept of pre- versus postmarket to the stages, because that's going to be crucial to the discussion here, which is to link the FDA concept, which is all based around premarket and postmarket, not versus but in conjunction with the IDEAL concept, which is based on the idea that once a technology is approved, it leads a certain technology life cycle. And when you do a snapshot at any given time, you try to figure out where we are at that point in the life cycle, so that we can evaluate it.

DR. McCULLOCH: Okay, Jeff, I may not be able to answer your question, but -- because I think that these are two different ways of looking at the world, and I have personally, as I say, been on a steep learning curve this year trying to understand the FDA way. And the dichotomy there seems to be in that world, it's either premarket or it's postmarket. And I've struggled with mapping that onto IDEAL.

The best I can do is say that by and large, it looks to me as if the I, D and E, the first three stages, map onto something like what the FDA are talking about with premarket approval. But it's easy to think up exceptions. It's easy to think up situations where that doesn't apply or where, in fact, it moves right to the other end of the life cycle because these are not orthogonal ways of looking at the world. They're completely different paradigms. So, I hope that's helpful.

DR. MARINAC-DABIC: If I can just respond a little bit: even though we've been using premarket and postmarket, I hope that, by our presence at and commitment to this meeting, we are really portraying a strong commitment to the Total Product Life Cycle. This is the direction in which our agency and center are going, and I think the premarket and postmarket terminology is used here just so we can talk about certain phases. But ultimately we are all about generating and appraising the evidence from the moment the idea about a new device is conceived, through all the phases of the evaluation before the approval for marketing is received, and then moving on to dissemination and a rigorous and robust postmarket surveillance.

DR. GRAVES: -- Steve Graves here. I was really very interested in your talk and enjoyed it very much, and I can see very clearly how the whole process works with respect to a new device, an absolutely new device. Where I have a little bit of difficulty in thinking about this process is, how does it work for established devices, where you're adding a new device into an established class of devices?

And, you know, from my experience, I can talk in a lot of detail about joint replacement. Currently, the Australian registry monitors 1,300 different hip prostheses. So, we have another device coming in, and what we know is that in that 1,300, there is a handful which are fantastic devices and work really, really well. Most of the rest don't work as well. And we know that most of the devices coming onto the market don't work as well as the best devices.

So, how would you use this system to assess those devices, do you think, the new devices coming onto the market for joint replacement, when we already have great devices? So the bar is very high; that risk-benefit balance changes completely, and the bar is very high. How do you work this system to assess those sorts of devices?

DR. SUMMERSKILL: Stephen, can you speak --

UNIDENTIFIED SPEAKER: (Off microphone.)

DR. GRAVES: I don't -- I think the whole point with joint replacement is no one is dealing with it very well, and that's why we've got all these disasters on our hands.

DR. McCULLOCH: I think any quick response to that is going to be not fully considered, so I might wish to change my mind later. But the first thing that seems obvious, because you've chosen that example, is the importance of registry data, good, high-quality registry data, because you're dealing with fairly rare and usually relatively long-term events, and you're dealing with events where a randomized trial is clearly completely infeasible as a way of setting the bar for new competitors.

But the second point I'd make is the one that seems to me to flow from some of the FDA's discussions over what they're going to do in the future. If you have a world where somebody has designed the ideal artificial hip, and you can show that this is very high quality, will that in the future not imply that they have some kind of privilege similar to the patent rights that are on drugs? And that will make the bar much higher for other people to make me-too devices and join them. Now, industry may view that with horror or delight; I don't know. But it would certainly be a game changer for that situation.

DR. VANDENBROUCKE: May I come back to -- speaking of what I see in this area, there is a difference between the IDEAL recommendations and the regulation of devices. The IDEAL recommendations are all about a kind of self-regulation for surgeons when they develop something. And I recall from the Balliol conference, which I also attended, that the overwhelming emotion of the surgeons present was angst, fear, which we didn't associate with surgeons in general. But it was there, because the overwhelming reason to bring this into the open was: am I doing something wrong to a patient? That was the overwhelming motivation for all the early stages, Stage 1 and Stages 2A and B, of development. And that's what they wanted to be responsible for and what they wanted to have in public.

Whereas if I think about the other extreme, think about drug regulation: drug regulation only starts after these early phases are over, and the licensing only starts there. So, the licensing starts only at Stage 3, so to speak, of IDEAL. And so, the similarity with devices, but also the difference, might be that maybe for devices you also need to have these early stages in the open, regulated, with someone being responsible for them, which is different, I think, from what happens in the pharmaceutical industry. So, that to me is a big difference between these two processes which we should take note of.

DR. SEDRAKYAN: I kind of think that it probably goes beyond Stage 3 of IDEAL. The reason is that like Stage 3 in IDEAL means early majority or substantial number of surgeons at least adopting a particular technique, the way I understand, but it's not necessarily the situation with a lot of premarket trials and studies that lead to the approval. Some of them might be based on case series. So it's even Stage 1 or Stage 2A just --

DR. GREG CAMPBELL: So, let me jump in here, too. I think FDA, even in the drug regulation, worries about things before what the drug folks call Phase III trials. And the reason is if you're experimenting on humans with a product that's not been fully reviewed, there is a requirement, at least in the United States, for an investigational new drug application to the Center for Drugs, or in the case of devices, an investigational device exemption. So, we are actually very interested in, you know, what happens even at the very beginning, in terms of experimentation on humans. That's not to say we're not also interested in the bench studies and the animal studies as well as a prelude to that.

And so, the early feasibility guidance document that Bill Maisel talked about really tries to address what I think has been referred to in IDEAL as Stage 1 and Stage 2A, certainly in terms of thinking about first-in-human and thinking about early feasibility studies. And although a lot of the resources at FDA are concentrated on the -- what I would call a pivotal trial and what you would call your Stage 3 in IDEAL, in my group, in the statistical group, we spend over half our time worrying about study design and less than half our time worrying about reviewing products after studies are completed.

UNIDENTIFIED SPEAKER: Peter --

DR. VANDENBROUCKE: Just to -- in the paper, Dr. Sedrakyan, I mean the things he was discussing about were things about evaluation of benefits in a kind of Stage 3 mode, while I recall most of the discussion at Balliol was about Stage 1 and Stage 2A. And so, that's a difference of perspective. That's what I wanted to bring out.

DR. GROVES: I just -- I don't know if it helps to think about the duty of care that everybody in this room has to patients, whether you're a manufacturer or regulator, a surgeon, anybody, a family doctor. What I want to know is whether we, at the end of this two-day process, can come up with something that will help surgeons when they say to patients I'm going to do this operation on you, and I'm going to put this thing in you. This is the level of evidence. This is what we know about it. We've tried it in our unit. We like it. We think it works. Or lots of people have tried it, and they've been registering the results, and someone's keeping an eye on that. Or there have been randomized control trials which have got some safety information, but it's limited because if this thing is going to go wrong, it might take some years before we know.

Don't -- isn't that what patients need to know before they can truly give informed consent to have one of these things put in them? Isn't that what we're trying to establish? I'm just sort of getting a bit bogged down between how IDEAL might inform the FDA process. I think it might be easier if we remember that's what we're trying to achieve at the end of all this.

DR. HENEGHAN: Okay. I'm going to come in now and try and draw something to the discussion because we could go off at a tangent, couldn't we?

Let's take it, as we all agree, that there's a need for better evidence around devices, so that by the time I come to need my device the evidence might be in place. What I think we need to think about, and I'd like your perspective, is what should be the priorities right now for us to put forward as recommendations? What is required now to improve the situation? Because we could come up with wonderful recommendations, but at the moment the FDA seems to be trying to produce documentation to facilitate a number of stakeholders. And so I'd like you all to think, as different stakeholders here, whether you're industry, academia, or whether you're patients, what should be the priorities right now to improve the situation?

And I'm going to hand that over because we need that prioritization to think about because I think we need to put recommendations in that are feasible and pragmatic. Richard.

DR. LILFORD: I've got just one idea which hasn't come out strongly in the IDEAL guidelines, although I think generally they are very sensible and logical and rest on a sound logical basis. But there's one idea that doesn't get enough attention there, I don't think, and that is the idea of randomizing from the very first patient. There are a huge number of advantages to doing that, and I'll come back to those later on when we talk about the learning curve and different skill levels and so on. But just imagine, as an illustration: there was a paper in The Lancet only a couple of weeks ago about, not a device, but it could easily have been a device, injecting stem cells into the coronary circulation in two very small groups of people. And although they started with the same kind of ejection fraction or whatever it is, and the one group improved significantly more than the other, that sounds to me, from my original teaching, like a classical case of selection bias, and all of that could have been swept away if they had in fact randomized from the very first person.

DR. HENEGHAN: People, help me here. Is somebody going to take these priorities down? Is it -- can we -- we want this --

So your priority, Richard, would be to improve the early evidence base -- designs at the very early, first -- in the very first evidence?

DR. LILFORD: Yes -- first of all, if you do have a really good control group, preferably randomized from the start, you don't lose any of the purposes that you would serve in not having it. But you do get some advantages. If the effect is truly spectacular, and this was quite a spectacular effect captured in The Lancet and picked up even in The Economist, then you've got a much better design to inform future patients and future study designs, and you've also got information about the trial, the learning curves and so on, changes of devices and generic classes, and possibilities for indirect adjusted comparisons. So those sorts of advantages -- I captured that years ago in a paper called "Tracker Trials" which explains the fundamental logic of that.

UNIDENTIFIED SPEAKER: Richard.

DR. KUNTZ: My name's Rick Kuntz. I'm from Medtronic, administrative person. I just wanted to follow up on Dr. Groves' comment.

If we look at what potentially might be a priority from an industry perspective, it is a way for the regulatory agency to keep pace with technology with respect to offering technologies to patients. And Dr. Groves, you hit on the point which I think is critical, which is, you know, what do patients want to know. And for some devices, the level of certainty about performance that is acceptable to a patient might depend on their desperate situation, whether they need something, or whether there's a reasonable alternative. I think, though, that the Agency and most regulatory agencies, all with great intent, have been working under the concept that all information that can be obtained has to be obtained before release, because physicians or other users might not inform patients specifically about those concepts and certainties, and the dissemination of levels of certainty is very difficult for patients to understand. So you can see that as there's more interest in understanding the durability of a device, the long-term effects and outcomes, necessarily the amount of evidence required before release has moved out.

And what happens is that, and appropriately so, as we want to know more and more about devices, the release of products moves out further; the premarket studies get bigger and bigger and bigger. Do we have -- and then the other problem, I think, is that there isn't a clear mechanism for the Agency to pull back devices very easily if, in fact, we find that new information leads to something that requires them to be withdrawn.

Because of those two barriers, we are in a situation where the demand for increasing evidence will basically increase the gap between the technology and the availability of the product. And I think that a priority for me would be to understand some way to classify certainties along the path that IDEAL laid out, in ways that patients can understand, and responsible ways for industry and caregivers to disseminate those levels of certainty, so that earlier release of technologies can be available, but technologies are not going to be used for patients if the uncertainties don't match the patient preference or the provider preferences.

So I don't know if I made myself clear on that. But what we look at is as you start to look at these nice steps that IDEAL has laid out, and others, this naturally doesn't mean that you only release it at the very end. We have to have a way to be able to release product during the path that matches the needs for patients combined with a very nicely described set of certainties around what we know and what we don't know.

DR. HENEGHAN: Just come around the room, come here.

DR. ASHAR: Binita Ashar from FDA.

I think the thing that I would select as a priority is how people make choices. As we look at every device, I mean we try to help manufacturers build the best devices that they can. As we release these out in the market, the thing that I worry about or think about, that I spend every day with manufacturers thinking about, is how these devices are going to be used. And this is done on many levels.

So, for example, we have, you know, hundreds of hernia mesh products available on the market. All of them have similar indications for use. How do surgeons, how do clinicians navigate which device to use over another device? And then there are the operative procedures; you know, there are multitudes of those. But how can I as a regulator inform the users of the information that they need to make these choices?

And other choices include -- there are some devices that help you make more informed decisions in the operating room. For example, it may be something that helps superimpose the image onto the liver so that you can perform an ablation or perform some sort of procedure knowing the location of the tumor. And the thing I wonder about is how does this change practice so that a surgeon may take the patient to the operating room versus having an interventional radiologist address the same tumor? And how -- does the labeling that we've created inform users about how to make those choices? And in any sort of clinical study, are we informing the patients of the risks and benefits as we understand them and as we can foresee them for the studies, you know, for making those decisions.

Another level to this is not only guiding if a patient goes to the operating room or the interventional suite but if a patient perhaps chooses a procedure versus choosing a drug. This may be, you know, along the lines of therapeutics. If a patient goes for a facelift versus getting Botox or getting some sort of, you know, other procedure, how are those decisions being made? So I think that this group could be very helpful in that regard because that kind -- that's at the border between our device regulation abilities and the practice of medicine. And sometimes I think we get faulted for intervening into the practice of medicine when we're simply trying to answer some of these questions for clinicians.

DR. HENEGHAN: So there are lots of problems there. But if you were to change the evidence base, what would your priority be, to say this is what we would require? What would your solution potentially be then?

Maybe other folk could help.

DR. ASHAR: I don't know if this addresses your question directly, but I thought that the table or the diagram that Art proposed was very, very nice. And the only suggestion I would have along these lines is that there is a single arrow going from randomized control trials to observational studies, and perhaps some thought into considering how choices are made from the very start so that it's a two-way arrow may be something to consider.

DR. HENEGHAN: And is that going to be different for patients and providers and the industry, or will one size fit all?

DR. ASHAR: I think it depends on the situation. If it's an aesthetic device, it may be very relevant to get a patient's perspective. If it's more of an intra-operative, decision-guiding tool, then you're going to need to get the relevant, you know, experts involved there.

DR. HENEGHAN: Okay. We're going to go around to Susanne and --

DR. LUDGATE: I was very interested in what Professor Lilford said, and whereas there is a place obviously for RCTs, I would really take issue with RCTs early on. I mean, I think if we take an example like transcatheter aortic valve replacement, there's no way that you can do an RCT early on. You can't do it against no treatment because these patients need a valve. What are you going to do it against? You've got to build up, you know, the surgeons' expertise. Can they do it? Under what circumstances? What are the limitations? And build up the criteria for use. So I would really question doing randomization as early as you suggest.

DR. HENEGHAN: And come back on that, Richard.

DR. LILFORD: I'll come back. Yeah. I'm not saying it can be done in every case, but there are many cases where it could be done, and I gave I think a perfectly good example, and there are others as well. I think a lot of the EVAR trials did it, if not from the very first case, okay, then very, very early on, and in two strata: one comparing operation against best medical care, and the other comparing two different kinds of treatment.

DR. HENEGHAN: Rita, do you want to come in?

DR. REDBERG: Thanks. Rita Redberg, UCSF.

So I would -- my comment. The major trial was a randomized controlled trial, and it was done in inoperable patients, so it certainly can be done for devices.

But I actually wanted to follow up on Binita's comment because you mentioned surgical mesh. And that is a good example, and I'm wondering how, if we were going to be imposing the IDEAL framework, that would work differently because I believe now the surgical mesh for hernia doesn't have safety and efficacy data. It went through 510(k) process, and so you raise good questions that are hard to answer for our patients. And certainly making comparisons between the different mesh products is very difficult because there aren't any comparative trials. So how would the IDEAL framework inform that process so then we were able to answer those questions of safety and efficacy and comparative effectiveness?

DR. McCULLOCH: Could I just comment -- that seems to me to be essentially the same question that Dr. Graves was asking. In other words, how could IDEAL help with products that are already out there in very wide use?

DR. REDBERG: And new ones coming on the market now, because we need actual data to compare, and we don't have the safety and efficacy data for a new one coming on the market, because it's substantially equivalent to something we already have that doesn't have safety and efficacy data. And it does certainly relate to that --

DR. McCULLOCH: Sure. I just think this is a --

DR. REDBERG: -- and to what Bill was talking about earlier on the 510(k) --

DR. McCULLOCH: -- this is an iteration of the same debate. It's a difficult question, but it's essentially the same question. I think Carl is trying to push us towards identifying some priorities.

DR. HENEGHAN: Yeah, that's correct. I think we could spend two days having these discussions and keep going on. We have to think of the priorities: what would we recommend as evidence requirements to industry, to patients, to us as academics? What are the bits we should be looking at?

Bruce, you may want to comment?

DR. BRUCE CAMPBELL: Yes. I mean we kept referring to registers, which have been a labor of love for me. But why do we need registers? Well, we need them for procedures because, in the U.K. anyway, we don't have sufficient coding. The health services data don't record enough for us. And there's something wrong with a lot of manufacturers' data. You know, evaluators don't trust it. Where is it? What are they hiding?

I think one of the very important things is the engagement with manufacturers in all of this, and we're talking at the moment with the Association of British Healthcare Industries about their postmarket surveillance data: what would make it acceptable to us as evaluators, in the same way that we would look at a prospectively designed observational trial. And the sorts of features that we are putting to them are: number one, some independent supervision; number two, knowing that they're actually looking at the right outcomes, as much as they can look at a few limited outcomes; and the transparency and availability of all the data.

Now, if manufacturers could produce that, they're the people -- they know how many of these things they've sold. They know every single one. It's not like a procedure where you've got to go out and try to find out who did them. The manufacturers know every single device that was sold. And so if they could be engaged in a rather better and more reliable way of collecting data, in a sense that would substitute for the kind of registers that we keep finding so elusive.

DR. BARKUN: I think I'm finding it difficult to give you some recommendations now because we're still kind of getting our heads around things. And, again, if I take the example of the mesh, I'll just go through the comments with what I've heard. So if the meshes are in place and a mesh is coming out which is, you know, an "improvement" over another mesh, that's a totally different issue than if it's, you know, a metal mesh that's coming out that wasn't there before.

And to answer the question of how the IDEAL helps you, the IDEAL would look at those two perhaps differently. If the first one is something that's much closer to first-in-man, the bar that it would set to say let's see what we do is different, because it's much, much earlier in the life cycle of that product, presumably, than the other one, where the first few stages might be similar to what the other ones are.

So in the first case, in the case of the metal one, looking at the way that I would use the IDEAL recommendations, I'd go to the table which we have been showing you, which says, you know, you have to have a special consent, so patients understand this is something that's really new, and it's not just that you're doing a procedure or that you're assessing it; this is a new device that you're using. And safety becomes really important. Therefore, the outcomes are primarily centered around safety -- a bunch of outcomes around safety. And then as you get better with it, and it looks like it is safe, then you gradually go into efficacy or feasibility outcomes and how long it takes to do it and so on.

So I think it depends -- we're lumping a lot of devices together. The way IDEAL will help devices is by using it to find out where we are in the possible life cycle of this. And, in fact, you can argue that the postmarket vocabulary that's used here by the FDA can apply to some devices coming in at Phase I, some coming in at Phase II, some coming in at Phase III. It's totally different. And the way you use IDEAL is by seeing where you think you are in that.

Second thing, just to comment on --

DR. HENEGHAN: Sorry. Can I -- can we come to your second point? Because I think it's really quite important and -- so your priority is to say, well, let's really provide a robust framework for patients, for industry, and for the people doing the research.

DR. BARKUN: If you look at -- people will remember well, I think, Wilson looking at, to answer your question, what it is that makes a device actually get used. I remember a table, I don't have it exactly in mind, but there's an issue of what it brings to the patient, so the patient's perspective of how much benefit there is. What does it bring the surgeon or the interventionalist, meaning if it's something that's easy to do, it's got a better chance of gaining acceptance than another one. Another one is how strongly it was marketed because, let's face it, that is very, very important. And the last one is data, actual data. What we're doing with --

So to answer your question, the IDEAL principle tries to have a number of outcomes which can help in each of these areas. For example, for the patient you'd say, in the early stages, well, it looks like it's safe, you know, for you, for people who have your -- and for the surgeon we'd say it looks like it's doable, and so on. And it depends on which stage you are at to bring this information.

Once you get into the more advanced stages, the level of refinement of your outcome changes. You start getting into quality of life data. You start getting into, you know, long-term data. Again, in the IDEAL what we do is describe the patients that are getting it, the surgeons who are doing it, the outcomes that are being measured, and then we adapt the methodology that should be ideal, pardon the pun, for it at that stage.

And one example I would give, for the randomized trial, to the second point I was going to bring up: can we do a randomized trial from day one, from patient one? Well, this has always been an issue. Being a surgeon and being involved in dozens of randomized trials, including unfortunately a number that I had to run myself, it's very difficult to do that when a patient's got, you know, a gun to their head, if it's something that's really big. But there are other methodologies, like the winner-take-all methodology, which, as I understand it, is not totally dissimilar to a randomized trial but does use Bayesian statistics to go ahead and do that. So the question there is, if the outcome is really, really strong, then you can use a methodology which is slightly different perhaps from a randomized trial in order to get the same effect. So the key is to match the outcome methodology to the point at which you're assessing the technology.

DR. HENEGHAN: To our industry --

DR. FEINGLASS: Shami Feinglass.

So I would agree with you on setting out that framework as clearly as possible. If we can set out a framework coming out of this meeting that people can use and adapt depending on what stage of the product life cycle they're in, which patient it is for, what outcome you are looking at, and where you are trying to go, so that you have different profiles, different patient populations, frankly different risks that you're looking at with those studies, allowing you to change that methodology to the point where one size does not fit all, I think that would be fabulous for regulators. And again, I speak as a former regulator, so, Susanne, I completely understand your point of damned if you do, damned if you don't, because I have been in that situation.

Now, in the industry you're looking at how can industry be flexible enough to meet the needs of the patients, the people doing the studies and the regulators. And you need different methodologies to do it. That's what I am looking for from this group. That's why I have participated in this last year. That will be very helpful to us.

DR. HENEGHAN: Can I ask you a specific question? Should it be an advice framework, where we give advice about using the framework, or, if you think about drugs, you've got clinical study reports, which are a reporting framework, where you say this is what we should expect to see at this level, with this evidence, and this is what we want. One of the key things I see is that different stakeholders require different types of evidence products. So we could actually have a reporting framework and say, this is what it looks like when it comes to the FDA, but actually around the world as well: this is what a study report should look like, and at this stage this is what it should include.

DR. FEINGLASS: I think it's a little bit of both. So an advisory guidance document, something of that sort, is very helpful to industry. It's also very helpful to regulators. It's helpful in that it is binding and not binding at the same time. So no longer being a regulator, I guess I can say that now. It allows people to do the right thing within a framework that's there.

There are, however, I'm certain, some things that may want to be put more in stone. I would -- I will jump over to my colleague Richard because he deals in a different device space than I do, and when you look at the life cycle of devices, some devices may be farther along the scale of having interesting methods that they look at to prove their point. Others may be earlier in that life cycle.

So to your point, guidance is the first step, giving some advice, some guidance, something that people can follow, and then having people move in that direction through different iterative processes will be helpful. This will not be static. I don't expect it to be static. Frankly, I think it has been static for a while, and it's time to move it forward, and we have opportunities to do that.

DR. HENEGHAN: Peter.

DR. McCULLOCH: Yeah. Carl, I just wanted to change tack a little bit, hopefully in the direction I think you want us to go. We're here because the FDA is in the middle of change, and they are confronting specific problems based on their own history and where they're coming from. And I think one of the things I'd like to see would be for this group to make recommendations to help them. And so very specifically, Dr. Maisel defended 510(k) and said we don't want to get rid of it, but we want to restrict it to a different group of products. Well, which are those? I think one of the things that isn't clear to me is, okay, if it's going to be more restricted, where is the bar going to be? Because above the bar are going to be all the things that we really want to discuss, and we don't know what those are until we know where the bar is.

And the second point I'd make, which I think would be really useful to the FDA, is the one I alluded to earlier. Seems to me that we're talking about a future system for the devices that are above the bar, where there's going to be a kind of integrated process of investigation, and there's not going to be this dichotomous world of it's premarket, it's postmarket; it's fine, you can do what you like, or you can't do anything.

So I think there's a very important role for IDEAL in looking at that situation and asking, you know, what kind of coverage with evidence framework would be acceptable, would give you acceptable evidence during that process, and what kind of integrated study design package would you say to manufacturers look, do an early study to show us that you've got a stable device that works. Then move that out into a larger study to show us what it might be capable of, and then move seamlessly into a trial.

So those are areas where I think we can look at IDEAL and make suggestions as to how it could help FDA.

DR. VANDENBROUCKE: May I play devil's advocate -- I do a lot of that. I mean, one of the reasons we are sitting here is the metal-on-metal disaster. Okay?

UNIDENTIFIED SPEAKER: Can you say that again?

DR. VANDENBROUCKE: The metal-on-metal disaster with the hip prostheses. That's one of the reasons we are sitting here. Try to fit the metal-on-metal disaster into the IDEAL framework, because from what I hear, Jeffrey and others will say, well, it's not a new device, so these early stages can be skipped, and even Stage 3 could have been skipped -- should have been skipped -- with the metal-on-metal, and what should have happened is the very early detection of the failures.

So, again, not to be completely devil's advocate, it depends on the stage of development what you need. Say the metal-on-metal would have needed a very early warning system, while other completely new things might need an IDEAL type of approach. So it would really depend on the question, so to speak.

DR. GRAVES: I wouldn't mind jumping in here, as one of the people who actually identified the metal-on-metal issue, to say that I think you're completely wrong. Part of the process was that there was early identification of the issue. There was no action on that early identification, so that's one issue.

But where the problem came was that the devices were approved on substantial equivalence, and they weren't equivalent. And so it really comes down to saying, what is substantial equivalence? Because there was a mistake made that these were substantially equivalent, and they weren't. They were quite different devices, they behaved differently, and they had a whole new mode of failure. So substantial equivalence was the critical issue. The early identification was there, and then there was a failure to act on that early identification. So I disagree with you on a number of points.

DR. VANDENBROUCKE: Would you say it definitely would have been better if the representatives had agreed with us that they were so terribly different?

DR. McCULLOCH: I mean if I can jump in on that, I was going to point out that Charnley actually started this way, you know, Charnley, the inventor of the artificial hip, started with metal-on-metal and reported very similar problems actually. And they were one of his early failures, which, of course, he didn't talk very much about. But if you look into the history it's there, and they were quite clearly a completely different concept from the standard, you know, high-density plastic versus metal hip.

DR. HENEGHAN: Okay. We're going to stop. We're into our think tank, and we are thinking.

My summary is that there's a need to think about the framework, and to think about a framework in a total product life cycle, because some devices on the market today could be the next metal-on-metal. And so, within that total product life cycle, within that framework, have we got the correct evidence advice, right from first-in-man through to the postmarket, so that we can still look at devices on the market today?

Why don't you all have a cup of coffee, and we've got 15 minutes before we come back. Thank you very much.

(Off the record.)

(On the record.)

DR. ASHAR: Okay. I think in this -- oh, sorry. I must not be pushing the button. That must be it.

I think in this next session we were just going to have a reminder for like five minutes of a discussion or an overview of the IDEAL framework just so that we have that in the front of our minds before we get started with Greg Campbell's presentation. And followed by -- I'm sorry.

UNIDENTIFIED SPEAKER: Jeff Barkun.

DR. ASHAR: Jeff Barkun's presentation.

DR. MARINAC-DABIC: Jeff Barkun is going to present these slides because there have been questions pointing toward, you know, maybe some confusion about what the IDEAL framework is, and they're really nice slides, and so we decided to give three to five minutes to present them.

DR. BARKUN: Thanks very much. I apologize for, I guess, changing the agenda, but a number of people were asking, you know, what the IDEAL is. We kind of presume that people will go look at the papers, go to the website, but just putting it up on a slide as a straw man I think might be useful. And I will remind you that, as we found out this morning, there is a website which has this, and you have the papers there.

Just in the next four minutes to explain to you what -- the purpose of the IDEAL changed over time. And what I'm going to show you is what was essentially the impression that devices in general are going to be approved, and they're going to be approved either for the right reason, the wrong reason, it's irrelevant. The fact is they're there, and data is being generated at any point. And the question is what is the best way of evaluating that device given the particular point in time in its life cycle where it is. And that's what we essentially tried to do.

This first slide illustrates -- it's from a classical paper by Rogers on the theory of social change, the adoption-of-an-innovation curve. I think this was based on the introduction of a new type of corn in the American market, but conceptually it's been used over and over again. And conceptually it's pretty straightforward. On the X axis you have time. On the Y axis you have diffusion of technology. So in our case, that would be the percentage of surgeons or interventionalists using the device or doing the procedure. And initially there seems to be a slow uptake. And then you get to a point where there's a critical mass of people who are adopting the technique, who are starting to use it, and part of that is a number, and part of that is that these are people who are usually leaders in the industry. And then after that there's a huge takeoff, and then you plateau.
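A minimal sketch of the S-shaped adoption curve described above, assuming a simple logistic form; the midpoint, rate, and ceiling are made-up values for illustration and do not come from the workshop slides or any registry data.

```python
# Illustrative logistic adoption curve (Rogers-style diffusion of innovation).
# All parameters below are hypothetical and chosen only for plotting.
import numpy as np
import matplotlib.pyplot as plt

def adoption(t, midpoint=5.0, rate=1.2, ceiling=100.0):
    """Percent of surgeons using the device at time t (years since introduction)."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 10, 200)
plt.plot(t, adoption(t))
plt.xlabel("Time since introduction (years)")
plt.ylabel("Diffusion (% of surgeons adopting)")
plt.title("Illustrative adoption-of-innovation curve")
plt.show()
```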

So it seems to be that there's a slow phase and then a very rapid, rapid phase. The reason that I make this point here is that if you're trying to do, as people asked, a randomized trial, trying to do a randomized trial at a point when this is going on isn't possible. Now, I have to be very careful in what I say by impossible, because I don't think it's impossible to do a randomized trial. But it's going to give everyone a lot of headaches. There will be, you know, a lot of crossovers, and it won't be what we want. So there is such a thing as an ideal timing for a randomized trial. Conceptually it would be probably around here.

Now, the problem is you never know when it's going to take off. Case in point, this slide, which is from a paper in JAMA. On the Y axis you have the percentage of cases that are done laparoscopically for a particular indication, and on the X axis you have time. And each color is a different organ being removed laparoscopically. So over here you have cholecystectomies, fundoplications, and then here donor nephrectomies and so on. And what you see is that the theory-of-social-change curve I showed you is not what happens in every case, which goes back to the question of what the drivers are that make it so that a new device or innovation is taken up by the surgeons in this case.

And so based on this type of data, it was clear to the IDEAL group that we would not be able to base a paradigm on a curve. And that's why we came up with stages. And the fact is these stages are superimposable on the curves, but at different levels. And the stages are kept -- and if you're talking about medications, the stages are totally linked to the way that you evaluate the medication and to the way that it comes to market. So there's no problem. But in this case, in what's been called the uncontrolled world of surgery, you don't know the way yet. So what you essentially do is you look at a number of variables over here, and this is a simplified table from the best table that we have, which is in paper three of the IDEAL. This is a table from paper one. So you look at a number of characteristics which help you know where you are in the stage of this particular life cycle of this procedure. And it's based on things such as the numbers and types of patients, the number of surgeons. Should there be a special ethical consent signed? And this is not to study the procedure, but just because conceptually it's still an experimental procedure. Should you have a special consent for that procedure, and is a learning curve at play?

So looking at a number of these characteristics, you then decide where am I at in the stage of this innovation? Am I in Stage 0, 1? Am I in Stage 2A, 2B, and so on? And what comes below this, which isn't here, is the actual evaluation. And so what the IDEAL does is take that curve, divide it into stages, which are much more robust than the curve itself, and then we suggest specific outcomes that should be used for that stage and specific methodology that can be used to evaluate those outcomes.

So I just wanted to give you that summary of what was essentially three three-day meetings in Oxford, and this table really, aside from the recommendations which were very clearly presented by Peter this morning, was the output of those meetings.

Yes.

DR. GROSS: Tom Gross from the FDA. Can I ask you to talk about indications as they relate to that last slide? They're very relevant to FDA's regulatory processes. So how are the indications worked out for the device or procedure of interest?

DR. BARKUN: Okay. The way that you're talking -- I'm going to ask in kind of FDA terms -- are you talking about premarket or postmarket?

DR. GROSS: Well, it applies in both cases, so in premarket so a device is marketed based on indications. When it's in the postmarket environment, there could be new indications that are explored. Depending upon what those are, they may have to go back into a premarket cycle for more formal study.

DR. BARKUN: I think that's exactly the point. I think you are bang on. And I think the idea is over here, if you take a look at what we have, this is not an exhaustive list of the categories. I think one of the things that I understood from this morning from the think tank that we had is that what we want in this column is a list that is applicable to each of the different healthcare provider and patient. So, for example, here from the -- I'll answer your question in a second. From the patient's point of view, all we have is, you know, ethics which is pretty basic. And at least, you know, we thought that it was important that the patient be represented, but there would be other patient characteristics that they would look for which would be -- which had to be analyzed at each stage, and I think you're exactly right. I think here you'd have indications, and over here you'd have obviously for the earlier stages there would be a narrower type of indication. And then when there's a wider set of indications, so it comes out here, then you'd get to the point where you have to look at different outcomes and different methodologies perhaps for that.

Now, it's not to say that you're going to hit a new technology multiple times, over and over and over again. But I think the idea is, given that they can come in at different stages, you have to have, as you point out, the indications or the disease populations they apply to.

And I think the -- what we're talking about -- one point I wanted to make is when we talk about Stage 4 over here, this is not what we mean just by postmarket. Again, postmarket can be at any point, not just at this Stage 4 per se.

DR. SEDRAKYAN: These are great thoughts, Jeff, and this is going to come up throughout the day, from postmarket to coverage with evidence development, so I think this should probably be discussed during the facilitated discussion.

DR. MARINAC-DABIC: But we should keep this probably at some point on the screen when those times come so that --

DR. SEDRAKYAN: Yeah. Sure.

DR. MARINAC-DABIC: -- people could be reminded.

DR. ASHAR: Okay. Our next speaker is Jeff [sic] Campbell.

DR. GREG CAMPBELL: Which one of these microphones -- yeah. Is this it? All right. Thanks.

Okay. I'll try and stand where people can see me. So I'm going to talk about sort of the premarket thinking within the U.S. FDA in terms of medical device regulations and how we think about trials. And so the outline is I'll talk a little bit about medical device studies; talk about unique problems I think that surgeries pose, although I think I'm preaching to the converted here; talk a little bit about the placebo effect, design considerations for nonrandomized studies, and we do within FDA in the Center for Devices, we do see nonrandomized studies; and talk a little bit about Bayesian designs and analyses.

So there was a draft guidance document that came out from FDA in August which concerns pivotal clinical study design, the title of which is "Design Considerations for Pivotal Clinical Studies for Medical Devices." It's available on the website. And what I want to do is talk about a few of the concepts in there which are related to, I think, the IDEAL thinking in terms of staging.

So that guidance document talks about three stages. It talks about an exploratory stage for clinical investigations in the premarket. It talks about a pivotal stage in the premarket, which is the definitive study. And it talks then in the postmarket about the postmarket stage. So those are the three.

And the first is the exploratory stage. And I'm pretty sure that that includes Stages 0 and 1 in IDEAL and maybe 2A. I'm not sure. And the pivotal stage in the premarket is either 2B, if I got the lingo right, or 3, depending on the situation. I'm not sure.

Okay. So the early feasibility guidance, and some of you I think have that in front of you, is a guidance document that came out quite recently where the FDA is trying to think about the first-in-human studies for medical devices as well as early exploratory studies, where it's not clear that the device design has been finalized. I mean that's sort of the idea. And so there is this iteration about, you know, is the device right; is the procedure right; and all that's getting worked out. So that came out less than a month ago.

This draft guidance on pivotal clinical studies identifies three kinds of devices: therapeutic devices, which I think we all understand; aesthetic devices, and FDA regulates aesthetic devices like wrinkle fillers and things of that ilk; and then diagnostic devices. And for the most part, I won't talk about diagnostic products here. Although there are some surgically implanted diagnostic products, the problems there are, in fact, quite different, although I should say that about a third of the products that CDRH sees are diagnostic products.

Okay. So the guidance document on pivotal clinical study design talks a lot about bias, and I guess that's because statisticians like to talk about bias and so -- and clinical trialists do as well. So it's a systematic error in the estimate of a treatment effect. And the objective in any clinical study is to eliminate it if you can, reduce it if you can't eliminate it, and understand it and estimate it if neither of those first two work.

So what I need to talk a little bit about is how devices are regulated in the United States, and I'll talk about it in terms of the PMA; I will not talk about it in terms of the 510(k). The pivotal clinical study design guidance document addresses mostly PMA thinking, although some of the thinking does apply to 510(k)s for which clinical data is needed. Okay. So the regulation is quite broad. We rely on valid scientific evidence to determine whether there's reasonable assurance that a medical device is safe and effective. And, in fact, the law and the regulation go on to say exactly what constitutes valid scientific evidence.

Our drug colleagues in the United States start and stop with that first bullet, well-controlled trials. The phraseology in CDER, in fact, is adequate and well-controlled trials and in the plural, meaning that any drug needs to be studied in two Phase III trials.

But our law, our regulation is much broader and allows for partially controlled studies, objective trials without matched controls, well-documented case histories and reports of significant human experience. You can drive a truck through that.

Okay. So let me talk about well-controlled investigations to begin with. And, in fact, our regulation for devices, in terms of effectiveness, says that it should be principally from well-controlled studies, which is quite interesting. So in terms of safety, there's this broader expectation, but principally we should be talking about well-controlled studies. And so the issue is, what's the control? And the regulations and the law go on to list the four different kinds of controls: the placebo control, the no-treatment control, the active control, and the historical control.

So I start -- I did this at the last minute, and so I didn't have lots of examples of nonsurgically implanted devices, but there are a bunch. Things that don't require surgery to implant, or something that sits on the body but is not surgically implanted, are examples. There are a lot besides contact lenses. Whether percutaneously delivered products are surgical or not, I don't really want to debate, partly because I think the issues are very similar anyway. So there are lots of --

But what you need to know is that for a lot of Class III products in the United States that come through the PMA route, in fact, those are surgically implanted. And you see here a list. It doesn't include a lot of the other devices that have been mentioned this morning. But you see it's quite broad.

Okay. So I want to talk about surgeries. And part of the issue is the device may be implanted through a surgical technique, so that's one kind of surgical situation. The control device might also be surgically implanted, and so there's a surgery there. Those might be different surgeries, probably are. You could have a situation where the device is not surgically implanted, but it's being compared to something that is, and there's a misprint here. I know that for carotid stents the comparator is endarterectomy and not CABG, and that shows you how tired I was when I wrote this. Okay. But surgeries are an issue in many trials, and that's sort of the point.

Okay. Complications: while everyone knows that surgeries are not reversible, many implants are also difficult to remove. One of the worries everyone understands here at the table is that the skill of the surgeon turns out to be important. Many of these surgeries are not standardized, and so you need to worry about, you know, the protocol describing exactly how that's done. There's a learning curve. This has come up this morning a bit already. You don't want surgeons to be masked or blinded, usually; it's not a good idea. But you might want a third-party assessor. But that may actually be impossible as well, because a third-party assessor may be able to visually see what kind of surgery people got.

There is the issue about how willing participants are, either as investigators or as patients, to participate in a randomized trial. And there's the issue about the long-term effects of implants. Implants don't go away; with pills, you stop taking them and sometimes the effects vanish, but that's not always the case here. So there are longitudinal issues having to do with whether the device continues to remain as effective and as safe over time.

Okay. I'm probably -- the placebo effect in surgery is well known. People worry about it, and the view most of us have is that it's an effect at least as significant as you see in the placebo pill world, in the drug world, and that there's some literature that suggests that the more complicated the procedure is, the larger the placebo effect might be. Ted Kaptchuk at Harvard and some of his colleagues have written a number of papers about this.

The other issue is well, is it really -- is surgery really inert anyway? I mean maybe there's an effect even if you don't do anything but pierce the skin. There's the famous NIH trial about ten years ago about fetal tissue transplantation where people had holes drilled in their skull without any delivery of therapy. So there are ethical issues there.

This is not a medical device, but an interesting example, which I think is well known now by Mosley and colleagues having to do with knee osteoarthritis, three arms in a randomized trial, 60 patients per arm, and two years of blinded follow-up and unable to tell any difference in the three arms. So the question about placebo surgery comes to the fore here because, in fact, one of the three arms was simulated arthroscopic surgery.

Okay. So there are lots of situations where it's a real challenge to figure out what to do. One is a design example for brain stimulation. So everyone gets the implant, but half of the people randomized do not have it turned on. So for studying things like deep brain stimulators, so that's one way to try to address some of the bias problems.

We see a lot of situations where, in fact, there are nonrandomized controls, and there are really two kinds. One is a concurrent, nonrandomized control, and the other is the historical control. Okay. And there are some interesting things that I think can be done from a statistical point of view with those. And what it tends to involve, for historical controls, of course, there's temporal bias which is the last bullet there. The propensity score approaches, though, suffer from the middle bullet here which is you can adjust for the things you see, but you can't adjust for the things you don't see. And the things you don't see or the things you don't measure in a randomized trial are taken care of, but in a nonrandomized study they're not. So that's a problem we're all familiar with.

But the statistical methodology, nonetheless, allows for thinking about causal inference, what would have happened using propensity scores. And we've had a lot of success asking companies to think about using propensity scores in nonrandomized studies. And quite frankly, there's a risk for a company to do that because they aren't going to know until they recruit the patients whether the patients are sufficiently similar in the two groups, and that's a challenge but one that we're willing to talk about.
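To make the propensity-score idea concrete, here is a minimal sketch on simulated data, assuming standard pandas and scikit-learn tooling; the covariates, sample size, and effect sizes are all hypothetical, and this is not FDA or sponsor code. It estimates each patient's probability of receiving the device from measured covariates and derives inverse-probability weights, which, as noted above, can only adjust for what was actually measured.

```python
# Minimal propensity-score sketch for a nonrandomized device comparison (simulated data).
# Covariates, coefficients, and sample size are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "severity": rng.normal(0, 1, n),
})
# Treatment assignment depends on measured covariates (the source of selection bias).
p_treat = 1 / (1 + np.exp(-(0.03 * (df["age"] - 65) + 0.8 * df["severity"])))
df["device"] = rng.binomial(1, p_treat)

# Estimate the propensity score: P(device | measured covariates).
ps_model = LogisticRegression().fit(df[["age", "severity"]], df["device"])
df["pscore"] = ps_model.predict_proba(df[["age", "severity"]])[:, 1]

# Inverse-probability weights; adjustment only removes bias from *measured* covariates.
df["weight"] = np.where(df["device"] == 1, 1 / df["pscore"], 1 / (1 - df["pscore"]))
print(df.groupby("device")["pscore"].describe())  # check overlap between the groups
```

Checking the overlap of estimated scores between the two groups is the step that reveals, only after enrollment, whether the groups are similar enough to compare, which is the risk to the sponsor mentioned above.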

Okay. We also see observational studies, one-arm studies without a control, and we use objective performance criteria, OPCs. Usually this is in a situation where we well understand exactly the product. We have a lot of data. It's usually based on very complicated statistical analysis and can be used for safety or for effectiveness or for both. Some of the good examples in that regard are heart valves and intraocular lenses.

The agency also uses performance goals, and both of these are talked about in the draft guidance on pivotal clinical study design that I mentioned at the beginning. And the concern is for these sorts of things, for performance goals and really for OPCs as well, where do they come from? You know, who suggested them? Did the company suggest them? That's probably not always a good idea. It's not clear the FDA should be suggesting performance goals either. It would help if they came from medical societies, for example. But the fundamental question is if a company has a performance goal, and they satisfy that performance goal, do we then think that we have adequate evidence that the device is effective if it's an effectiveness goal or safe if it's a safety goal? And that's the conundrum.
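As a hedged illustration of judging a one-arm study against a performance goal, the sketch below uses an exact one-sided binomial calculation; all counts and rates are invented and are not drawn from any actual OPC or submission.

```python
# Hypothetical one-arm study judged against a pre-specified performance goal (PG).
# All numbers below are illustrative only.
from scipy import stats

n_patients = 200          # single-arm study size
n_failures = 14           # observed device failures at follow-up
performance_goal = 0.12   # maximum acceptable failure rate set in advance

# Exact one-sided p-value: probability of seeing 14 or fewer failures
# if the true failure rate were exactly at the performance goal.
p_value = stats.binom.cdf(n_failures, n_patients, performance_goal)

print(f"Observed failure rate: {n_failures / n_patients:.3f}")
print(f"One-sided p-value against a goal of {performance_goal}: {p_value:.3f}")
# Meeting the goal statistically is one thing; whether that constitutes adequate
# evidence of safety or effectiveness is the conundrum raised in the discussion.
```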

Okay. The learning curve: there's a lot of literature on the learning curve. You know more about it than I do. There is this 80-page document by Ramsey et al. on statistical approaches to worrying about the learning curve. One of the issues about the learning curve, and this came up this morning, is should you randomize from the first patient, or should you design a trial where there are a few patients on which the surgeon gets to try the device without their counting toward the evidence for the confirmatory or pivotal study.

There was an issue about the variability of the surgeons. Surgeons differ. We know radiologists differ in their ability to read mammograms. Surgeons differ as well. One way to think of this, from a statistical point of view or a design point of view, is to think about nested designs where patients are nested in surgeons, nested in centers, nested in countries, for example.

In terms of analysis, what that breaks down to is the components of variance analysis or in terms of thinking about the generalizability of the study, you don't want to generalize to just those surgeons. You want to generalize to the entire population, and now the statistical lingo for that is to use random effects model, which allows for the variability of the surgeons to be built into it.
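A minimal sketch of the nested, random-effects analysis described above, assuming simulated data and the statsmodels mixed-model API; the surgeon effects, sample sizes, and treatment effect are all made up.

```python
# Sketch of a random-effects ("mixed") model with patients nested in surgeons.
# Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_surgeons, patients_per_surgeon = 20, 30
surgeon_effect = rng.normal(0, 1.0, n_surgeons)   # between-surgeon variability

rows = []
for s in range(n_surgeons):
    for _ in range(patients_per_surgeon):
        treated = int(rng.integers(0, 2))
        outcome = 2.0 * treated + surgeon_effect[s] + rng.normal(0, 2.0)
        rows.append({"surgeon": s, "treated": treated, "outcome": outcome})
df = pd.DataFrame(rows)

# Random intercept for surgeon: inference generalizes to the population of surgeons,
# not just the ones enrolled in the study.
model = smf.mixedlm("outcome ~ treated", df, groups=df["surgeon"]).fit()
print(model.summary())
```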

So in the last couple minutes I want to talk about Bayesian statistics. In 2010, FDA finalized a document on the use of Bayesian statistics in medical device clinical trials. There is a recent article that appeared in the Journal of Biopharmaceutical Statistics, which is a history of all of the publicly available products that have been approved at CDRH based on this Bayesian methodology. Interestingly, almost all of them are implants. And many of the implants are surgical ones as well.

The idea of Bayesian statistics, for those of you who don't know, is to -- the document talks about two different approaches. One approach is to use prior information. Lots of times in a study, people have prior information, and the issue then is how do you use that prior information, and the guidance document suggests that you have previous clinical trial data that is on the same device or perhaps on a very similar device. And then you build a hierarchical Bayesian model that borrows strength, depending on how similar the current study is to the prior data and one or more previous studies. So we've had a lot of success with that.
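The hierarchical borrowing idea can be illustrated, in greatly simplified form, with a conjugate Beta-Binomial "power prior" that down-weights a historical study; this is only a stand-in for the hierarchical models the guidance describes, and every number here is hypothetical.

```python
# Simplified stand-in for "borrowing strength" from a prior study: a Beta-Binomial
# power prior that discounts historical data. (The FDA guidance describes hierarchical
# models; this conjugate sketch only illustrates the borrowing idea.)
from scipy import stats

hist_success, hist_n = 85, 100   # hypothetical historical study on a similar device
a0 = 0.5                         # discount factor: how exchangeable we judge it to be
cur_success, cur_n = 40, 50      # current study data (hypothetical)

# Beta(1, 1) baseline prior, plus discounted historical data, plus current data.
alpha = 1 + a0 * hist_success + cur_success
beta = 1 + a0 * (hist_n - hist_success) + (cur_n - cur_success)
posterior = stats.beta(alpha, beta)

print(f"Posterior mean success rate: {posterior.mean():.3f}")
print(f"P(success rate > 0.75)     = {1 - posterior.cdf(0.75):.3f}")
```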

But the other kind is where you don't really have any prior information to start with, and you learn during the course of the trial, and you use the knowledge that you learned during the course of the trial to adapt the trial, basically. It tells you how long a trial should go. It tells you when to stop and so on. And so one example is a spinal fixation device implanted surgically, and usually the endpoint we look at is a two-year endpoint for that. I mean, we'd like to study devices for more than two years, but two years seems like a goodly amount of time in the premarket. But one of the intermediate endpoints is the one-year endpoint. And although one might not want to use that as a surrogate for approval, you could certainly use it: you could build a model, not knowing how good the intermediate endpoint is, and then based on that model stop recruiting when your predictive posterior probability got to a large enough value, and then be pretty confident, when the data rolled in a year later, that you were good to go.
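In the spirit of the adaptive example just described, here is a Monte Carlo sketch of a predictive-probability stopping rule at an interim look; the interim counts, thresholds, and sample sizes are invented, and a real trial's model (which also linked the one-year to the two-year endpoint) would be more elaborate.

```python
# Monte Carlo sketch of a Bayesian predictive-probability stopping rule at an interim look.
# All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
interim_success, interim_n = 110, 150   # patients with final-endpoint data so far
max_n = 300                             # planned maximum enrollment
posterior_threshold = 0.975             # posterior probability needed at the final analysis
target_rate = 0.65                      # rate the device must beat

def final_analysis_wins(total_success, total_n):
    # Success if posterior P(rate > target) exceeds the threshold, under a Beta(1, 1) prior.
    post = stats.beta(1 + total_success, 1 + total_n - total_success)
    return (1 - post.cdf(target_rate)) > posterior_threshold

# Predictive distribution for the remaining patients, given the interim posterior.
sims, wins = 5000, 0
for _ in range(sims):
    p = rng.beta(1 + interim_success, 1 + interim_n - interim_success)
    future_success = rng.binomial(max_n - interim_n, p)
    wins += final_analysis_wins(interim_success + future_success, max_n)

print(f"Predictive probability of final success: {wins / sims:.3f}")
# If this probability is high enough, the trial can stop enrolling early.
```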

So we've had actually a lot of success with that, and there are other adaptive approaches. Certainly treatment adaptive randomization is an idea that fits in quite well with the Bayesian approach. It's not used currently very often within the medical device industry.
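Treatment-adaptive (response-adaptive) randomization can be sketched in a few lines under a Bayesian view, where the next allocation probability tracks the posterior probability that each arm is better; the interim counts here are hypothetical.

```python
# Tiny sketch of Bayesian response-adaptive randomization: the next patient's allocation
# probability follows the posterior probability that each arm is better. Counts are invented.
import numpy as np

rng = np.random.default_rng(3)
successes = {"device": 18, "control": 12}
failures  = {"device": 7,  "control": 13}

# Posterior draws for each arm's success rate under Beta(1, 1) priors.
draws = {arm: rng.beta(1 + successes[arm], 1 + failures[arm], 10_000) for arm in successes}
p_device_better = np.mean(draws["device"] > draws["control"])

print(f"P(device arm better) = {p_device_better:.3f}")
print(f"Allocate the next patient to the device arm with probability ~{p_device_better:.2f}")
```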

So, in conclusion, what I want to leave you with is that surgically implanted medical device studies pose some unique challenges, and I think everybody around the table understands that, in terms of design and in terms of analysis. There are lots of design innovations that can be leveraged for the device world, and although it would be wonderful if we could always live in a world where randomized clinical trials were the one and only way of getting to market, the reality is quite different. And, in particular, things like propensity scores, learning curve methods, Bayesian statistics, and adaptive trials are some innovations that we think can help the medical device community.

Thank you very much.

(Applause.)

DR. ASHAR: Rather than pausing for questions, I think we'll go on to the next -- have Peter talk and then have our facilitated discussion.

DR. McCULLOCH: Okay. I apologized before. I'll apologize again. Here I am. I know that from the foregoing brainstorming discussion, there's a sense of discomfort in the group. That's absolutely normal. At this stage in a workshop of this kind where you've got a big, open forum, I'd expect there to be quite a bit of discomfort about what is it we're trying to do. I therefore want to give you as much time as possible to do more brainstorming. I'm going to try and shorten my talk to do that. A lot of what I had to say could be summarized by saying I agree with Dr. Campbell.

So I just want to talk about the IDEAL -- the early phase studies from the IDEAL perspective. So I'm talking about the I, the D, and the E here. And I'm going to focus as quickly as I can on some of the problems that our recommendations throw up, because we're going to meet these, and I'd be interested in other people's perspectives on them. So this is just saying that what we've designed in IDEAL really is a set of thinking tools. Some of the tools go a bit beyond that. Some of the recommendations we made are so precise that I think they're a kind of hypothesis saying this is a good study design. We don't actually know that, because these are not evidence-based recommendations. These are based on expert consensus conferences, Level 4 or 5 evidence, so they need testing. And there are barriers.

But let's look at the first-in-man recommendations. Okay. We went through these earlier. We need to register. Registration should contain enough information that others could follow it. It should be a professional obligation. And this one, anonymous registration, should be an option. Now, you can see the advantages of that. If you do something that you thought was a great idea and turns out to be disastrous, it's very useful if your colleagues know about that and don't have to repeat your mistake. However, there is a big problem. There are two problems here. I'll talk about the other one first which is that -- it's not so much a problem as an opportunity, I guess. Again, in a post-510(k) world, getting registration may actually be a good thing for a commercial developer to have. That could mean that someone like the FDA could say well, fine, you're getting a benefit out of this. We'll make you pay for the register. But what about people who don't have the money to pay for the register but have a great idea and have done the first-in-man study? That's a minor problem, I guess.

Slightly more serious is this one. If you had an anonymous register, and it was really anonymous, it would be filled in no time with junk and spam from people with extreme opinions and odd ideas. But if you recorded who was putting stuff into the register, then the legal profession would be on to you in no time, because they would find a way of making this legally discoverable and destroy exactly what you set out to create.

I have had some discussions with people who are involved with aviation safety in Britain about this because they have a good system called CHIRP which allows pilots to report mistakes they make. And now the obvious difference there is if a pilot reports a mistake, it means he's still alive, so it wasn't that bad a mistake. And the difficulty pointed out by the pilots is that therefore their register is only based on things that didn't actually cause harm.

However, they have a similar problem with legal discovery, and they have found that they can put up quite an effective firewall between the anonymous registrant and the lawyers who want to know who he was and where he was by having a sufficiently respectable reporting authority that's willing to say this is confidential between us and the person who reported it to us. And I don't know who would be the appropriate person. These are just sort of wild suggestions, really, about who that might be in different countries.

Okay. Another set of problems. These are the recommendations for the development, the Stage 2A group. So the study should be prospective. They should publish a protocol. They should record all the cases. They should provide clear information about when you changed the device and/or when you changed the patient selection criteria and what happened after that. And we shouldn't use a case series unless we have to.

What are the implications and difficult issues here? Well, again, depending on the environment that's created in the new world of FDA, it may be that commercial considerations make companies that have devices unwilling to publish the details. But if there's something in it for them, maybe they'll be very willing.

The use of functional endpoints, i.e., does the device do what it says it does, doesn't answer the safety question, but that's the same situation we have now.

The requirement to record information on every consecutive patient, including the ones you decided not to use the device in, is very difficult to police. And, of course, there's a very high risk that you're going to make a Type I statistical error by jumping to conclusions about associations between changes in the device and changes in the outcome. But that's an absolutely inevitable part of the mental exploratory process in this phase of research anyway, and some of your guesses will turn out to be right.

So this is prospective development studies, which is in red because it's our very prescriptive suggestion about how you do this. It basically comes down to more or less what I've said with a couple of little knobs on. One, we've suggested the use of the statistical process control methods that Dr. Campbell referred to so that people can follow sequentially what's happening. And the other is a suggestion which Richard Lilford made some years ago about the use of Bayesian statistics to develop an iterative analysis of the process, which again is similar to some things Dr. Campbell was saying.
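As one concrete, hypothetical example of the statistical process control idea, a Bernoulli CUSUM can track a complication rate case by case during a development series; the acceptable and unacceptable rates and the signaling threshold below are illustrative choices, not IDEAL or FDA values.

```python
# Minimal sketch of a Bernoulli CUSUM for sequentially monitoring a
# complication rate during a development-stage series. The acceptable rate
# p0, unacceptable rate p1, threshold h, and outcome data are hypothetical.
import math

p0, p1 = 0.05, 0.15   # acceptable vs. unacceptable complication rates
h = 3.5               # signaling threshold (tuned to control false alarms)

# Log-likelihood-ratio weights for a complication (1) and no complication (0)
w1 = math.log(p1 / p0)
w0 = math.log((1 - p1) / (1 - p0))

outcomes = [0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1]  # hypothetical case series
s = 0.0
for i, y in enumerate(outcomes, start=1):
    s = max(0.0, s + (w1 if y else w0))
    flag = "  <-- signal, review the series" if s > h else ""
    print(f"case {i:2d}: CUSUM = {s:.2f}{flag}")
```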

Now, this is interesting. This is the document that has been referred to earlier, the consultative document the FDA released on the 11th of November for so-called early feasibility studies. I really sat up when I saw this, because there are very close similarities between what we've said for prospective development studies and what the FDA have come up with for early feasibility studies, and I'm delighted about that. It's great that we're singing from the same hymnbook.

All of these bullet points are more or less exactly the same. It's for use with an intervention that's not yet stable. It's small prospective studies focusing on function. Frequent iteration is accepted. Therefore, you need speedy regulation. And we have slight differences. We recommend every patient gets recorded, so you know what happened. They say, well, we might allow nonconsecutive patients. We can debate that. And we've got these extra statistical suggestions.

This is just a further quote about the early feasibility study, because I went through the document in some detail and just listed the stated purposes and assigned them to different stages in IDEAL. And you can see they're all D or E.

So that's two sets of difficult issues. Here's a third one. These are our recommendations for the exploration stage. Studies should be prospective and controlled. Standardized terminology is critically important, and again Pierre Clavien has been a leader in helping surgeons to develop standardized terminology for unwanted outcomes of surgery. Cooperative studies -- the idea here is that we register all the cases together in a common database, and that helps us. It helps us to define what the intervention is, because there's often disagreement about that at the start; to agree what quality control is; to define what the learning curve is and when we should say, right, that's enough of that, we don't need to worry about it anymore; and, most importantly, to come to an agreement about what the question is.
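One illustration, under assumed data, of how a common database could help answer "when is the learning curve over" is to fit a plateauing curve to an operator's consecutive cases; the simulated operative times, the model form, and the within-5-percent-of-plateau rule below are all hypothetical.

```python
# Minimal sketch: fitting an exponential learning curve to operative time
# versus case number to estimate when performance plateaus. Data simulated;
# parameters and the "within 5% of plateau" rule are hypothetical choices.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
cases = np.arange(1, 61)
# Simulated operative times: plateau 90 min, initial excess 60 min, decay 12
times = 90 + 60 * np.exp(-cases / 12) + rng.normal(0, 8, cases.size)

def learning_curve(n, plateau, excess, tau):
    return plateau + excess * np.exp(-n / tau)

(plateau, excess, tau), _ = curve_fit(learning_curve, cases, times,
                                      p0=[100, 50, 10])

# Case number after which the fitted excess is under 5% of the plateau
n_learned = tau * np.log(excess / (0.05 * plateau))
print(f"estimated plateau ~{plateau:.0f} min, learning over by case {n_learned:.0f}")
```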

So this is the Phase IIS study which again is just a more detailed version of that. Just to point out, this already has been done. This study of gastrectomy in Italy was done some years ago using exactly this design very successfully.

However, more problems. For devices, the main actual considerations are going to be the indication and the learning curve, and you're going to have a three-way conversation like this. But somebody has to standardize the terminology. Who should it be? I don't think myself it should be regulators. I agree with the comment earlier that it should be professional societies or bodies.

Development in exploration stages, as we've seen in looking at the early feasibility study, in practice they overlap. And, therefore, study design may have to be a bit of a mix and match.

There's always going to be pressure to randomize early. We've heard the arguments for that from Richard and the counterarguments from others, and this balance comes down to scientific respectability versus learning and the practical political considerations of actually getting a study done. But this is a different one. Suppose you had a new world of coverage with evidence development where you were allowed to market your product so long as you put all the patients into one of these Phase IIS studies. Well, you could market your device forever so long as you made the study bigger and bigger and bigger. So there would clearly have to be some limits to the size and duration of such studies. Otherwise you get --

So this just basically summarizes what I said at the beginning, that I think IDEAL is very well suited as a template. And, again, if you want more detail about what Jeff was talking about or what I was talking about, check the website.

And that's enough. Thank you.

(Applause.)

DR. ASHAR: Well, I think from here we can start our facilitated discussion.

MS. RAYNER: (Off microphone.) -- optimal time on the agenda but -- to be here today --

Thank you. Can you hear me now? Great. And I'm sorry. This placement is great for me to see those of you in front of me, but a couple of you are going to be able to check how well I combed my hair this morning from the back, and I apologize for that as well.

As I said, I'm Anita Rayner. I'm the Associate Director for Policy and Communications in OSB.

And would you like to introduce yourself?

MR. BARTH: Sure. I feel like Anita and I lost a game of duck, duck, goose, and we're just here in the middle. I'm Abram Barth. I'm an attorney in the Office of Chief Counsel, and I mainly focus on devices, combination products, and human subject protection in clinical research.

MS. RAYNER: Thank you.

This morning has been very stimulating and somewhat daunting as this august group looks to completing its task over the next couple of days. Our session is entitled "Optimizing an Integrated Total Product Life Cycle for Devices and Procedures," and I think you can fit just about anything under that title. But I want to start by picking up on a thread of conversation that I heard both in Peter's discussion just now and in the earlier discussion, where we talked about the indication and the indication for use, because I think that's really critical from an FDA standpoint: the indication is the measure upon which we make our determinations, and the indication drives study design and drives data collection. So I want to throw that discussion back out at you.

Yes.

DR. HENEGHAN: Yeah, thanks. Greg, I thought that the first presentation was really interesting in trying to tease out the problems with randomized trials and their design. I face this all the time in the interventions I look at, and there's a whole host of evidence on pragmatic trials. If you actually go to the pragmatic trial literature, all of these issues have been discussed over about 12 or 14 years, and in fact there are a number of good articles in the BMJ. Pragmatic trials are intended to be trials that are more reflective of the use of the intervention in practice. We find all these issues in primary care trials: they have issues of blinding, and that's been discussed, and they have issues of randomization, like at which level should you do it -- at the practice level, so as a cluster? And I think if you actually brought some of that into this design analysis, you might find there's lots of useful stuff in the pragmatic trial world that will help inform some of the debate.

MS. RAYNER: So I'd like to challenge some of my FDA colleagues, perhaps, to don't be shy, and jump in here in terms of our experiences and even with some examples of how that type of approach might fit in within our current regulatory approach, or if there are impediments within that -- how we might work to change those.

DR. GREG CAMPBELL: Let me jump in for just a second and say that it's interesting in the United States that the drug regulation talks about efficacy. The device regulation talks about effectiveness. Effectiveness is a broader concept than efficacy and tries to get at a little better, I think, how the device or medical product would be used. It doesn't go all the way, to be sure, and I think pragmatic trials are certainly an interesting idea, and related to that would be large, simple trials, I suppose, in particular.

DR. HENEGHAN: Yeah, and there are some very important points. I just jotted down a few, for instance. So if blinding is not used, you explain why it's not used. That's a simple requirement in the reporting standards for pragmatic trials that may help when people are saying, well, it just isn't used.

However, when you don't use blinding, then there's a suggestion that you should use important outcomes -- outcomes that are objective -- to overcome that. And so that's a way of thinking about the tradeoff of losing the blinding versus gaining an additional way of looking.

MS. RAYNER: I'm sorry. Go ahead.

DR. LILFORD: I think -- I don't know if this is quite on the previous point or not, but it's been worrying me from two talks now, and this is the question of the placebo effect, which I'd like Greg to comment on from your excellent talk. In that talk, I wasn't convinced what you called the placebo effect really was a placebo effect, and perhaps even more so in the talks early on this morning, but it could be a regression to the mean and because it's quite --

DR. GREG CAMPBELL: No, I would include -- yeah.

DR. LILFORD: -- it's quite an important distinction because a randomized trial will get around the problem of regression to the mean. It won't get around the problem of a placebo effect.

DR. GREG CAMPBELL: Well, I think, Dr. Lilford, that's a great point. I guess I would put, under the placebo effect, regression to the mean as well. Whether we can surmount that within a clinical trial is not so clear to me if what you're recruiting are patients who are severely ill. By satisfying the inclusion criteria, they may seem to be less ill as you study them even with no therapy. And so even in randomized trials, regression to the mean --

DR. LILFORD: The control group should also regress to the mean then, shouldn't they?

DR. GREG CAMPBELL: Well, that's right. Of course. Yeah.
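To make the distinction concrete, a small simulation sketch of the exchange above: patients enrolled because a noisy baseline measurement exceeded a cutoff look improved at follow-up even with no therapy, yet two randomized, untreated arms regress by the same amount, so the between-arm comparison is not biased; all numbers are invented.

```python
# Minimal simulation of regression to the mean in a severity score.
# Patients are enrolled only if a noisy baseline measurement exceeds a
# cutoff; with no treatment at all, their follow-up scores look "improved."
# Randomizing them into two untreated arms shows both arms regress equally,
# so the between-arm difference stays near zero. All numbers hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_severity = rng.normal(50, 10, n)           # stable underlying severity
baseline = true_severity + rng.normal(0, 5, n)  # noisy baseline measurement

enrolled = baseline > 60                        # inclusion criterion
followup = true_severity[enrolled] + rng.normal(0, 5, enrolled.sum())

print("mean baseline of enrolled :", round(baseline[enrolled].mean(), 1))
print("mean follow-up, untreated :", round(followup.mean(), 1))  # lower

# Randomize the enrolled, untreated patients into two arms
arm = rng.integers(0, 2, enrolled.sum())
diff = followup[arm == 1].mean() - followup[arm == 0].mean()
print("between-arm difference    :", round(diff, 2))  # near zero
```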

MR. BARTH: In the Stage 1 paper, there's talk about device development and there being modifications to simple tools or revolutionizing science or revolutionizing equipment. Under FDA's device development framework, we mainly focus on changes to intended patient populations, device modifications, or new devices. Can you explain how IDEAL's Step 1 framework would comport with FDA's system or European framework or what modifications you would recommend in light of the IDEAL recommendations?

DR. BRUCE CAMPBELL: Shall I come in there, since there's silence? My reading, reminding myself of the paper we wrote in The Lancet, was that the definition of the initial groups of patients and the indications do need to be very clear -- and I quite agree, they're so often obscure and muddled and not well defined. Those need to come in in the D bit, the development. You know, does it work? And the next thing is, right, what group of patients are we going to use it in? And those, I think, should be the initial studies.

And then you come to the next point in the E bit -- sorry, the A bit, I beg your pardon, the stage where you're coming to the assessment. At that stage, one will start looking at whether there are actually any other groups. But I think the initial group has to be set at the D stage and the exploring of other groups at the A stage.

And I turn sort of to coauthors like Peter McCulloch to see is that correct?

DR. McCULLOCH: Yes.

DR. BRUCE CAMPBELL: Just one little observation of human behavior. You ought to feel very proud in the United States. Peter, having just re-entered your country, you'll notice he didn't cross that red line once when he spoke.

(Laughter.)

MR. BARTH: Okay. Maybe this is too broad, but maybe it will ignite some more discussion. In law, when we have a quiet panel of judges, it's sometimes seen as good, but here I think we would like a more active participation, so maybe this will spark some discussion. In light of the IDEAL recommendations and the IDEAL model, and considering FDA's regulatory framework and Europe's regulatory framework and Australia's, what would be, from the IDEAL perspective, what would be the premarket regulatory infrastructure that would be most desirable to achieve IDEAL's objectives?

DR. HENEGHAN: Can I -- I'll comment. I mean the real tension for me is between how much you leave the advice vague, so there's lots of scope for maneuvering -- whenever you develop this, I can say it's vague and I can put lots of methodological ideas into it -- and the problem that vagueness creates for industry, because it's open to interpretation and they need a lot of skill to help them. And who's going to provide the skills to say this is the right design to do? Or is this a document that actually gets more specific and says, at this stage, this is what you should actually be doing, this is what you should be thinking, this is the sort of application? And I know there's a tension with that because of the issue of getting innovation to market.

But I think -- you tell us from the FDA, which bit do you want on -- do you want to stay on the vague bit, which I'm quite happy with, or do you want us to start putting in real detail-oriented stuff that we think is important?

DR. SEDRAKYAN: Maybe I can add something to this discussion. Peter talked about this anonymous reporting and registration issue, and obviously FDA has an IDE process for it. So any time first-in-man use is considered, the industry will need to file an IDE. So in some way that has been captured: the registry of first-in-man use is captured through the FDA regulatory process currently. What is not clear to me is, when FDA requests data from industry, whether all worldwide data is being presented -- whether industry is required to share everything they have worldwide on first-in-man use, or whether industry can pick and choose which ones they report. And it would be important to know if, say, in the Australian or U.K. regulatory environment there is a similar process to the IDE, so that the information can be shared across regulatory agencies for these recommendations to take place.

DR. LUDGATE: I think one of the things that was brought up that is very interesting, and that we've met several times, is this question of when a trial goes badly wrong for a reason. And then you have another trial come up maybe a few months later -- which we've had recently -- which is based on the same technology, but because of confidentiality we are not allowed to say that that has actually been done and didn't work, and so there's no good scientific basis we can give for going back and saying, you know, you shouldn't be doing that.

And I think we have real difficulties here. It may not happen very often, but there is a real need for failures to be documented somewhere. It is a problem. It's one --

DR. LILFORD: -- we can do a trial and then have to put in the public domain?

DR. LUDGATE: They don't have to do it.

DR. LILFORD: They're not --

DR. LUDGATE: No.

DR. GROVES: That's unethical, isn't it?

DR. LUDGATE: I'm sorry?

DR. GROVES: It's really unethical.

DR. LUDGATE: Well, I mean I agree. But that's what happens and --

DR. GROVES: How do you feel if you're the patient who goes into the next trial which shouldn't be done on that basis?

DR. GROSS: Well, maybe, Abram, I don't know if you could speak to the legal aspects of this? We're talking about intellectual property, confidential commercial information. These are protected if you talk about devices, and I think there's an important distinction here. If we talk about procedures -- for example, gastrectomy -- we could develop a different sort of research infrastructure, versus devices, where FDA does have a regulatory structure in place. I think those are two very different animals. And I understand that devices like stents have procedures as part of their deployment, but we're also talking about procedures that stand outside of device development. So those are key things, and I'm not hearing which track we're on. So I don't know if you could talk about --

DR. GROVES: Can I just say that wouldn't help you with the metal-on-metal hips because the procedure will be hip replacement. So if you look just at the procedure, you're none the wiser. What you're looking at is a different device, and that's the information the patients needed to get much sooner than they got.

DR. GROSS: Yeah. I mean hip implants involve a device, and they involve the surgical technique. But a gastrectomy, aside from I guess using the scalpel or such, is mostly focused on procedure. And so that, I would argue, could be much more in the public domain when it comes to first-in-human studies, because, again, some of that information is very protected, at least in the U.S.

MR. BARTH: Right. I think two points to keep in mind are that FDA is partnering with HHS in developing clinicaltrials.gov, which will make public a lot of information -- anonymized information -- and report the results of those investigations; and that when an IDE is submitted, it requires that prior investigations be reported, so FDA would have known of the results of those. But there are some confidentiality and disclosure issues that we'll have to be sensitive to.

DR. REDBERG: Are you saying that results would be reported in clinicaltrials.gov? Because they're not now.

MR. BARTH: I'm not entirely familiar with their structure, but I think in certain applicable clinical trials, some of the results would be available.

DR. REDBERG: Because the issue is that the negative or the bad trials don't get published and don't come to light, and the FDA regards those as confidential.

DR. FEINGLASS: Yeah. I mean clinicaltrials.gov will list whatever trial is being done if it's for a regulated purpose, or people want to publish from it. So all industry will contribute to clinicaltrials.gov as that paradigm exists. But to Rita's point, you won't -- whoever is doing it, you may not hear the full results of that if the trial was stopped. Now, you certainly can search those to see if there was a trial entered and call the distributor or call the manufacturer to find out why it was or was not finished. But the actual results won't be in there. Right, Rita?

DR. LILFORD: This might just be one of the things that we take away from this meeting today. There's a famous person in South Africa and Australia called Germaine Greer, and she said, "Don't complain. Organize." So I think most of the members of the public here would be outraged if they heard that trials were done and that the information was then not put in the public domain.

DR. HENEGHAN: So let's just check the clarification: for drugs, when you're on clinicaltrials.gov, within one year of finishing the trial --

UNIDENTIFIED SPEAKER: It's the law.

DR. HENEGHAN: -- the law is that you publish the results. And you don't have to publish them in a journal, but you have to post them on a website that's available and transparent. That's not the same for devices. So will your trials just stay exactly the same, or will you catch up in ten years and say, well, now we need to make them do it? If you're going to do it, you might as well say we've learned all this in the drug world. Why not start to think that, once you're registered, it should be mandatory to post the results one year after the last trial recruitment? That's exactly where we can -- devices are very parallel to drugs here.

DR. VANDENBROUCKE: I'm not certain that it's about parallelism, especially in Stage 0, which we started the discussion with. What's the equivalent regulation for Stage 0? From what I recall from our Balliol discussion, Stage 0 was invented because that's where the innovation takes place. Innovation to make a new drug doesn't take place in humans. You're not going to say, for the next patient I'm going to change the hydroxyl group. Okay? In surgery, you do have some innovation in humans. For some innovation -- I recall the examples we discussed -- it's the hand of the surgeon that's being forced. He does something against the rules, and gee, it works. And then he has another patient, and he does it against the rules, and it works again. And so the world continues.

In other instances, there are animal experiments done before, so they're different, but that's all Stage 0. It's the stage of innovation, which in the pharmaceutical world happens during the study of receptors and in animals. So that stage of innovation is different. That's why we made it Stage 0, but I'm not certain whether there is a regulatory equivalent to Stage 0.

DR. ASHAR: Yeah. I just wanted to add that I can't really speak to why these early results of failures aren't published. There are probably a number of medical/legal reasons that this information isn't available but probably should be. What I can offer is that, with all of our Class III devices, I don't think it's widely recognized that the summary of safety and effectiveness information that FDA labors over, looking at line-listed patient data, is available online, and oftentimes this is not considered in the peer-reviewed published literature. Only if the investigators choose to make a publication from the data does it appear. And I don't believe it's considered in reimbursement decisions that help guide the treatment care path for various patients. So that would be something that I think all of us could take back to our groups and consider as we construct future clinical trials, because not only does it give us the result for the device at hand, but it gives us an understanding of the caliber of the study that was previously done so that we can improve upon it in the future.

DR. MARINAC-DABIC: I think that's a great point, Binita, and in some of the latest rounds of systematic literature review and evidence synthesis, we've started actually adding the data from the summaries of safety and effectiveness to our models, and a recently published paper on orthopedic devices was one good example of it.

But I would like us, if we can, to go back to how to raise the bar in terms of what we ask in the premarket and what industry is thinking about doing in the future. And as much as we are trying to raise the bar, our colleagues from industry are trying to do the same thing. So I think it would probably work well to go back and think a little bit about the context, the precedents, the level playing field, how the science fits into all of that, and how this group can actually add recommendations to what FDA already has as a framework -- keeping in mind the very sensitive issues I just outlined, that there are certain frameworks and paradigms that have been utilized in the past, and recognizing that there is room for improvement; without that recognition we would not be convening a group such as this one.

So I'd like to hear some thoughts on that. Where are the gaps? And primarily from our colleagues, you know, in FDA from premarket, are there -- do you identify some areas where a recommendation from the IDEAL group can actually help to strengthen what we already have?

DR. BRUCE CAMPBELL: While people are thinking about that, can I just drop something else in, which may be so obvious that's why it hasn't been said? In the same way as Anita pointed out at the beginning of her introduction, we do need indications to be very, very carefully defined. As an evaluator of evidence, one of the other things that we find very difficult is that not only are the indications often obscure and mixed, the outcomes aren't what we want, and we see lots and lots of studies which have some outcomes but not actually what we want to know. And somehow, in all of this, we can get tied up in the methodology, but before we even get there, what are the two or three critical things we need to know? Somehow manufacturers and others need to be clear about that. We're the evaluators and regulators.

DR. GRAVES: I think that -- Stephen Graves. I think that raises quite an important point. Industry needs direction, and I'm somewhat loath to raise it, but do we need, from the point of view of clinical trials and clinical testing, to have defined standards for clinical testing which are process specific? I can see that the problem with that, though, is that once you have the standards, changing those standards is such a slow process. There needs to be a system where those standards can evolve and develop, with industry saying no, we want to raise the bar, or regulators saying no, this bar needs to be raised in this area. So there needs to be flexibility in there, and I think part of the problem with the current system of regulation is that it's really quite stiff, and it's really quite difficult to bring innovation into the regulatory process.

DR. HENEGHAN: Can I -- that's an interesting point, and I want to draw a sort of analogy with it. If you look at the blood pressure device industry -- and we've studied this -- they've got minimum performance characteristics for blood pressure devices through the British Hypertension Society, the European Society of Hypertension, and the AAMI in the U.S. And so any new blood pressure device now goes through their performance evaluation. And if you study devices that do and don't meet them, boy, the ones that don't are pretty poorly performing, and the impact of having a device that doesn't actually measure your blood pressure is catastrophic for a healthcare system.

And so by setting that minimum performance characteristic, the societies have got together and said, well, for a blood pressure machine you need to do it in 80 clinical subjects -- and we've got devices here that you're saying are really much more dangerous than a blood pressure machine, and you're going to do it in five or eight people. So allowing societies to bring in performance standards has actually improved innovation in blood pressure devices, not stopped it, and it's allowed them to publish in that area and bring in standards.
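For concreteness, a minimal sketch of checking a monitor against the commonly cited AAMI-style accuracy criterion (mean device-reference difference within 5 mmHg, standard deviation no more than 8 mmHg); the paired readings are simulated, and the exact criterion and subject numbers should be taken from the current standard rather than from this sketch.

```python
# Minimal sketch of checking a blood pressure device against the commonly
# cited AAMI-style accuracy criterion (mean device-reference difference
# within +/-5 mmHg and standard deviation <= 8 mmHg). The paired readings
# are simulated, and the criterion values should be confirmed against the
# current standard before any real use.
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 85
reference = rng.normal(130, 15, n_subjects)        # simulated reference SBP
device = reference + rng.normal(2, 6, n_subjects)  # simulated device SBP

diff = device - reference
mean_err, sd_err = diff.mean(), diff.std(ddof=1)

passed = abs(mean_err) <= 5 and sd_err <= 8
print(f"mean error {mean_err:.1f} mmHg, SD {sd_err:.1f} mmHg -> "
      f"{'meets' if passed else 'fails'} the criterion")
```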

DR. GROVES: There's an international collaboration which has really done a lot of work now called COMET, which is Core Outcome Measures in Effectiveness Trials. And I just searched their website using the word "device" and the term "hip replacement," and at the moment they don't have any core outcome measures for those things. But they're seeking collaborators. So far there are a lot of core outcome sets which have been put together by specialists and by patients, and the idea is that whenever you're going to do a study, a trial, looking at a particular condition -- say it's rheumatoid arthritis -- you will always include certain outcome measures. Of course, you'll have your own as well, but you will always have this core set. And they're driven by real clinical need and the information that patients and doctors need and, hopefully, regulators and health technology assessors. So it's a very important initiative, COMET. So anybody interested in working on these core outcomes, why don't you get in touch with COMET?

MS. RAYNER: (Off microphone.)

DR. LILFORD: (Off microphone.)

(Laughter.)

DR. McCULLOCH: Could I -- I wanted to just actually ask Pierre-Alain Clavien if he would mind commenting because his work on developing standardized outcomes for complications in surgery is very much aligned along the same lines as the work of COMET, which is an initiative by Jane Blazeby I think in Britain.

But to put this back into the perspective of what we're talking about, the questions in my mind are what are the outcomes that a regulator needs? You need a list of standard outcomes for safety and a list of standard outcomes for effectiveness, and they're different. And who is going to do the defining? Because I'm -- I mentioned my own opinion on this. But the FDA, for instance, might have the opinion that they're the best people to define exactly what is meant by the various outcomes and not the professional bodies.

Pierre, do you want to say anything?

DR. CLAVIEN: Peter, I don't know. I mean the system has been designed mostly to evaluate surgery, but of course it can apply to devices. The first thing, in terms of definition, is failure to cure versus negative outcomes, or complications. Failure to cure goes more in the efficacy area: you put in a device and it brings absolutely nothing, it doesn't cure; you do surgery for cancer and at the end the cancer is still there, so it didn't help. That's that area. And then the complication grading is a relatively simple scale, which takes into account what is required to correct the complication. So if you have a small infection around a device, you give an antibiotic. That's not the same thing as going back for surgery and anesthesia and taking out the device, et cetera.

So these grading systems have been used relatively widely. In fact, three years ago we published -- Jeff was an author of that -- an international re-evaluation of it. It has now been used in more than 1,000 studies by surgeons, mostly in the field of HPB surgery, because that's where we come from, but also, for example, the large cancer database in Sweden has adopted it, so in Scandinavia device data go into a database in order to reach some outcome measure.

I don't know whether that's the ultimate definition you want -- certainly not. Maybe you need something more to decide how we will monitor the outcome. But this one has been used by default because there was not much else with which to evaluate. For many years we had mortality; fortunately, mortality is no longer a very good endpoint, so you need something better -- that's what this targets.

DR. SEDRAKYAN: Anita, I just wanted to say we can continue because it's a box lunch, and this is a very exciting discussion. So if you want to go another 15 minutes, that's probably fine.

DR. BOUTRON: Yeah, Isabelle Boutron from Paris.

I just wanted to add that one other issue that might also be very important is the choice of the comparator, because depending on the comparator you're going to use in a clinical trial or in observational studies, your results might be really different. So I think we probably need to think of a recommendation on how we should choose the comparator.

MS. RAYNER: Let's start down here.

DR. HARGREAVES: Just a few -- several points really. I'm looking at this wearing several different hats, because obviously I'm here representing industry, but I also do some work for NICE, and I'm also from a nursing background, so I'm seeing it from all aspects.

One of the issues that I tend to have with clinical trials and producing evidence for products designed with an incremental improvement is that the outcomes we are told we have to look at, such as, say, recurrence rates, are not actually what the product has been designed to affect. You know, the product has been designed to, say, improve handling for a surgeon. It's not necessarily going to have an impact on recurrence rates, which were already very, very low. But then you need to start looking at whether things like the cost of producing the evidence to show that the recurrence rate is exactly the same as it was before, but the handling is slightly better, outweigh the benefits to the patient. So when do you actually get to the point of saying it's not economically viable to do this amount of research and evidence development to show something that we're either not going to prove or that's not relevant to what the product has been manufactured to do?

DR. SUMMERSKILL: I wanted to go back and revisit a point that was raised by Susanne earlier, because it illustrates a difficult choice that has to be made. You were stating earlier a concern about relying too heavily on, or demanding, randomized data. But then you said you also have a concern about what happens with studies that are not reported. And I wanted to get my facts right before I came back to you.

But if we go back to 2004, to the International Committee of Medical Journal Editors and the call for trial registration then, the actual definition of the medical interventions requiring registration was any intervention used to modify a health outcome. This definition includes drugs, surgical procedures, devices, behavioral treatments, process-of-care changes, and the like. So devices are specifically listed there as a category for which there should be registration of randomized controlled trials, without which they cannot be published in a peer-reviewed journal that subscribes to the ICMJE standards.

DR. LUDGATE: Sorry. Just a word. I think it refers to randomized controlled trials. Am I correct? The problem is that most of these trials are not randomized controlled trials. They are straight trials of just using the device in a sequential series of patients.

Now, we are working with industry to try and get them to register. Okay. It isn't something we've just left. But it is not mandatory on such trials.

DR. SUMMERSKILL: Sure. And a large proportion are not registered on clinicaltrials.gov, but this is precisely the point: if one wants to be able to assure the public that a device trial that produced negative results can be searched for, and if one wants to set a certain bar of evidence in regulation, then perhaps at some point in the regulatory process such information as randomized trial data may be something to consider quite seriously.

DR. DICKERSIN: I have a question that's back to the sort of 30,000-foot view question, and it ties together some of these things. And it has to do with what's happening here. How much are you tied in decision making to the law, and how much can you decide to do and change and implement yourself? So, for example, some of the things suggested by IDEAL, what's possible within the existing law and what isn't, and that has to do with registration. For example, if 90 percent of the studies are not randomized trials that come through you, then they aren't going to probably be on clinicaltrials.gov. And so the law there is very important. And I would think it would help set a context for our discussion if, in fact, you're constrained by law quite a bit, and the law would have to change.

And the part two of that question has to do with your system for approval and how different or similar is it to say the drug regulation, drugs and biologics, or how is it independent? Do you have advisory group? What is actually the system in terms of do the companies get together with you first to design the studies and say what their plan is, and then do they go back and do it and work with you throughout the approval process? Is that the same?

I think if maybe we understand the context for the working, that it might be helpful in terms of incorporating, if possible, some of the ideas that come up here.

DR. OGDEN: Hi. Neil Ogden. I'm the Branch Chief for General Surgery Devices at the FDA.

Talk louder. Okay. Is that better?

Neil Ogden, Branch Chief, General Surgery Devices at the FDA.

You asked a number of interesting questions. And before I answer them, or try to, I'd like to throw another wrench in the works. We spend a lot of our time reviewing and making recommendations on the devices themselves that are used in surgery, and companies spend a lot of time trying to convince us that they need to go through the 510(k) pathway -- the me-too, I'm-just-like-the-previous-device-that-went-through pathway. And so we spend a lot of energy determining whether or not that's a reasonable pathway for this modification, as the companies like to call it, when it may be a whole new technology we've never seen before. And that's easy; we just say no, you need a PMA, and you're going to have to do a whole -- but there are a lot of attempts by companies where there are subtle changes that may have significant effects as far as tissue effects or patient outcomes. And the companies would prefer not to do clinical studies because they're very expensive, so we have to determine, based just on bench data and a comparison to a predicate device, whether or not we really have to go to a clinical trial to get that safety and efficacy information, the outcome information.

As far as constrained by the law, I think we are a regulatory science entity, and we are constrained by the law and the regulations. And so we try to put science first. That's one thing we emphasize all the time in our meetings. Where does the science lead us, and how does that fit the regulations? And that's generally how we approach things. And so hopefully that answers part of your question, and I'm not sure I have an answer to the other part of your question.

There are certainly differences between the different centers, and they were pointed out numerous times earlier today. The regulations are different. The laws are different, and so we have to abide by those where we're working. We do have a lot of combination products, and so we do collaborate a lot with the other centers. We have an Office of Combination Products that determines which center is going to have the lead, and then we have to follow their laws. Even though it may be a device combined with a drug, the device part sort of has to follow the drug law, the review laws, and those types of things. So it makes it complicated and interesting.

Hopefully that answered some of your questions.

DR. McCULLOCH: Could I just ask for a wee bit of further information, actually? Because what you say is very interesting, but I don't think it fully answers Kay's question. We discussed this slightly earlier. Is it the case that the new recommendations for early feasibility studies could be accommodated within existing law, or would you have to get an act of Congress anyway if you were to change things as significantly as that?

MR. BARTH: The early feasibility, the guidance documents?

DR. McCULLOCH: Yeah. I'm referring to the document I talked about that came out the 11th of November.

MR. BARTH: Oh, right. Those have been issued, so they represent the Agency's current thinking.

DR. McCULLOCH: Yeah, but would you have to change the law to make them the Agency's standard practice?

MS. RAYNER: You're talking about the IDEAL context, correct, the early feasibility --

DR. McCULLOCH: No. I'm talking about the FDA's own documents --

MS. RAYNER: Oh, that.

DR. McCULLOCH: -- on early feasibility studies. So the -- which is a --

MS. RAYNER: The structure -- oh, go ahead.

DR. McCULLOCH: -- radical departure from what went before.

DR. DESJARDINS: So the early feasibility guidance document was written within the context of existing law. We believe that the contents of that guidance document can be implemented, as I think a companion document that went out with the guidance document was identifying a pilot program. We're going to start implementing the early feasibility studies in a select few trials.

The document was written in the context of our laws. I think if we didn't have those constraints, there's the possibility that there are other alternatives. The program that we put forward in the guidance document reflects what we think we're capable of doing under existing authority.

DR. McCULLOCH: I think you're saying yes, right?

UNIDENTIFIED SPEAKER: Yes.

DR. DESJARDINS: Yes, I am. And I'm also a lawyer, so that might explain why you guys don't understand.

DR. SEDRAKYAN: I wanted to add to this -- to reflect on this discussion. FDA is also a public health agency; this is not just about regulation, but there is a public health responsibility that is part of FDA's mission, so you potentially have powers beyond regulation. And when we're getting into this discussion of early phase registration -- let's say they file an IDE, and then it wasn't a success. They don't have to follow up. It doesn't get recorded anywhere. You don't regulate the product. But the knowledge needs to be maintained somewhere. That's a public health mission, right? To understand that there are a lot of experiments going on worldwide. A lot of these products might not be working well, but that knowledge is lost, and they might be repeated many, many times by different companies in different countries. So there's a public health issue here of patient safety.

DR. ASHAR: Binita Ashar.

I look at IDE studies all the time, and basically there are reporting requirements for the IDE, so annually we have to see what the follow-up was, and then the studies are formally closed. FDA does maintain a database where we can pull up all of the IDEs ever previously reviewed, disapproved, or approved, and look at all the memos to see what questions were asked. Oftentimes, as we're constructing or helping sponsors construct new clinical studies, we'll take a look at those files to be able to improve upon what we previously learned.

However, you're right. This information is not publicly available.

DR. SEDRAKYAN: (Off microphone.) And that information from one company certainly can't be shared with another company so that they don't do certain things in other countries before they come to the United States.

DR. ASHAR: That's correct. However, from a scientific perspective, we can recommend that they, you know, perhaps have monitoring of these additional potential safety events or, you know, additional factors that we've learned from the prior study. But it's not available publicly.

DR. GROSS: I think it would be good to hear from industry on this point of view, sort of what the pros and cons are of reporting?

DR. KUNTZ: Yeah. Thanks, Tom. I've been listening intently to all the comments here, and I think that many people would be surprised that industry shares the same sentiments that all of you have. In our company -- I can speak about our company -- we are moving to a transparent policy. We're the first to actually make some so-called proprietary data available to the public, and we hope to follow that with other experiences. The issues that were raised about making data at the patient level available so anybody can publish -- that is our goal. The concerns about plaintiff attorneys are always there, and that's one of the reasons there are a lot of issues regarding why industry may not do that. We're going to jump in the deep end and see what happens, and if it's a good experience, we'll continue to do more and more.

We think it's a good business model to be transparent. We think it's important to let the data out. I can tell you we have a policy in our company to publish every study we do. But it is very difficult to get negative studies published in peer review journals, and I can give you a whole list of --

DR. GROVES: No, no. It's not anymore. You've got PLoS ONE. You've got BMJ Open. You've got loads and loads of BioMed Central journals. You've got tons of journals where you can publish negative results, including the BMJ, The Lancet journals, the NEJM, and they publish lots of studies with so-called negative outcomes as long as what you're demonstrating is evidence of absence, not absence of evidence because it was a poorly designed study that can't show you anything.

DR. KUNTZ: Okay. Thanks for that because I haven't stumbled upon those yet and we haven't -- the journals that we have submitted to, maybe they're more American-centric. But they have been very difficult to publish just as -- yeah.

DR. MARINAC-DABIC: We'll try to post them in the FDA website for free.

(Laughter.)

DR. HENEGHAN: Just to come in on that point, though, it's solvable. The blood pressure field devised a journal called Blood Pressure Monitoring to publish all the protocol studies, so that's such a simple thing to solve in the modern world -- it's about making it available.

DR. KUNTZ: Well, let me just finish with that. The final point is that at least from our perspective, we agree that we need to move from this notion of having proprietary data as not available to the public to being more publicly available so that we can compete on the technology and science side. And this is a big issue. You know, if you look at major journals, and I can't speak for BMJ obviously, but if you look at New England Journal of Medicine, for example, roughly 50 percent of the randomized studies that have been published in the last ten years have been industry sponsored. And if those actually are proprietary data that people can't get access to, that's not good. So patient-level data is something we want to get available.

There are concerns about nefarious analysis, and as you know, when you review a paper for peer review, you often want to look at the methodology people do to derive their inference and conclusions. So we know that there are lots of ways to make inference and conclusions about studies if they get access to the data. So there are concerns about how to establish the methodologies that become transparent so that the nefarious activities don't occur. And obviously that cuts both ways, and you understand that. But these are the things that we have to work through. But I think that it's desirable for everybody to become more transparent.

DR. MARINAC-DABIC: Just one question for you. If we were to embark on a large evidence project -- with the actual data from Medtronic, as opposed to bringing it to us, put into this database, and again in the context of the Medical Device Epidemiology Network, in which industry would be an actual partner -- do you think it would be safe to assume that Medtronic would like to be part of this larger project, to actually give us the data to be put in the model at the patient level --

DR. KUNTZ: Yeah, I think we're definitely going in that direction. I think that the SSED tables that you spoke about are fully transparent tables. They're not patient-level data, but they are analyses that are done in great detail, and I agree that people don't know about those databases, even though they're available online. So for every study that we've done, very detailed data on every endpoint that was measured are in tables and available. So that's one thing that maybe the peer-reviewed journals or other investigators don't know about.

I think the issue you're talking about is getting the patient-level data that I, as an investigator, can download and do my own analysis on, to decide whether the right statistical test was used or the right subgrouping was used or how well the Type I error was controlled -- all those issues could be something that we'd like to make open. The concern is that the process of determining the right methodology, which right now is a complicated peer review process, is something that you have to pay attention to, because there can be a lot of analyses done in uncontrolled situations which can obviously end up having negative consequences in a variety of different ways.

So I don't want to use it as a shield to not be transparent. That wouldn't be fair. We would like to move to this transparency process as soon as possible, but one thing that this group could do is establish standards for methodologies so that it would be more inviting for industry to say yes, let's go ahead and do this, and then there would be no excuses as to why you should hold data back if we can have a more transparent methodological process.

DR. CLIFFORD: I just want to build upon some of the suggestions that we've heard so far in terms of making sure that we're bringing together all the various players at the same time in order to move forward. There's a current initiative called the Green Park Collaborative that's going on, involving a number of industry players, regulators, and then reimbursement or payer groups on an international scale, the intent of which is to come up with guidance -- right now looking at condition-specific guidance rather than looking at specific pharmaceuticals. But we can then see, if you're looking at a condition-specific intervention or indication, whether there's a way to incorporate some device guidance there. And the hope is that we're able to, not necessarily harmonize things across the world, but to come out with guidance so at least people know what the playing field is looking like.

It may never get level. It will not be binding. I'm going back to an earlier comment about whether the guidance should be binding or not. But, again, it's getting people in a room, hashing out these issues and kind of coming out with something that's going to make sense for patients in terms of getting access to interventions in a timely manner while still ensuring some robust evidence base on which to build it. So right now it's largely focused on the pharmaceutical side, but I could see maybe something akin to that happening on the device side.

DR. LUDGATE: I'd just like to make a small point. It's really good to hear Medtronic's commitment to transparency. That's excellent. It's my experience, however, and we've seen already today that there's a huge number of companies that are very small, and it's the smaller companies that really do not and are not nearly so keen to put out their negative results because it has a much greater impact on them. And I just think we have to bear that in mind.

DR. HENEGHAN: I'll comment on the small companies issue, because I've spoken to a lot of these companies, actually, and one of the problems they face is that they do not have the in-house skills whatsoever. So I guess you guys are saying we want these evidence requirements, and when they turn up, they're just throwing in a whole smorgasbord of stuff that's not fit for purpose. So you've got to think of a system that allows these people to interact with the skills. And one of the ways you could think about that as a solution is to start to create academic-private partnerships that drive industry and innovation but actually provide the skills, because what you're asking for is too difficult. These companies would need to employ epidemiologists with eight to ten years' clinical epidemiology experience to get information to you that's fit for purpose. That's not going to happen unless you're a big, big company, and even then it's difficult. So you've got to find a pathway of combining the industry innovation with the evidence requirements, and realize it's going to get worse for you if you just start asking for it.

DR. RITCHEY: Thank you all very much. This has been a very productive and very interesting morning. It's time for us to break for lunch. If you are an invited speaker, then if you walk around the hallway to the left here, all the way to the last door on the right, I think it's Room 1507, that's where lunch will be. And if you are in the audience, then if you walk around to the right here, there is lunch available for purchase there as well. We'd like to be back around 1:15. Thanks.

(Whereupon, at 12:38 p.m., a lunch recess was taken.)

A F T E R N O O N S E S S I O N

(1:15 p.m.)

DR. YUSTEIN: Okay. It looks like we have a quorum, so we're going to go ahead and get started. My name is Ron Yustein. I'm the Acting Deputy Office Director in the Office of Surveillance and Biometrics. And we're going to start the afternoon session talking a little bit on the postmarket side, and we have three different speakers today from the U.S. and U.K.

And first up is Dr. Mary Beth Ritchey who is our Associate Director for Postmarket Surveillance Studies in our Division of Epidemiology where she provides oversight for medical device postmarket studies and also is heavily involved in our Sentinel Program as well as our MDEpiNet Program, which I'm sure you're all familiar with. So Mary Beth is going to talk a little bit about postmarket regulatory strategies at CDRH.

DR. RITCHEY: Thank you. So I'm going to talk a little bit about what our postmarket practices look like at the FDA, and I would like to start by talking about the importance of having the postmarket piece in our research. As devices are approved or cleared, we typically have a lot of implanted devices, and the length of use of the device is not necessarily reflected in the reasonable assurance of safety and effectiveness that we see from the premarket data. And so long-term implants need longer study.

We talked a bit today about the learning curve and about how devices are used in the real world versus in the premarket RCT. We also need information about various subgroups and about how user training, location of use, surgeon preference and that type of thing affect how the device performs.

For the 510(k) Program, there are differences in what is needed both premarket and postmarket as compared with the more novel devices that are approved in the PMA Program. And Dr. Maisel spoke this morning a bit about the differences between devices and drugs and how combination products can also be affected, and we need to look at these as well in the postmarket.

There is a need to work toward reduction of adverse events and to assess rare events in the postmarket. And as we move forward, right now we have difficulty, due to the lack of a unique device identifier, in leveraging some of the data that's available. We also have difficulty in figuring out whether there's a device-associated event or an adverse event that would have occurred within that patient no matter what. And so the multiple modes of failure, the difficulties with combination products and adverse event capture are also of import.

So Dr. Maisel showed this lovely slide this morning about defibrillators and how defibrillators over the past 20 years have changed dramatically, and all medical devices do this, with one or two or a very few new applications to FDA but several modifications of the device. And then if the device is removed, or there's a new device implanted in a revision, it's difficult to know whether the initial device or the new device is associated with an event. And then, of course, we have incomplete documentation as we move along.

So these are our requirements in the postmarket across the board from the FDA. So there are compliance requirements as far as inspections, recalls, corrections and removals of devices from the market, and then there are more surveillance and studying type of requirements as well. There are requirements for reporting of adverse events. Class III devices that are approved may have post-approval studies, and then if a new signal is detected, then there may also be postmarket surveillance studies required.

Medical device reporting comes to the FDA through a passive surveillance system for adverse events. Manufacturers may report; there are some requirements for manufacturers. User facilities such as hospitals may report, and there are also voluntary reports that are captured. This passive surveillance system captures about 100,000 reports per year, and from those reports we'll read and monitor the individual reports themselves. We ask additional questions and try to get additional information from them. We also do data mining of the reports to determine if there is a signal there, if we are seeing something that's different from what we expected.
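
To illustrate what data mining of spontaneous reports can look like in practice, here is a minimal sketch of one common disproportionality screen, the proportional reporting ratio (PRR); the report counts are hypothetical, and this is not presented as FDA's actual method.

# Minimal sketch of a proportional reporting ratio (PRR) screen for a
# device-event pair. The counts below are hypothetical, for illustration only.

def proportional_reporting_ratio(a, b, c, d):
    """PRR from a 2x2 table of spontaneous report counts.

    a: reports of the event of interest for the device of interest
    b: reports of all other events for the device of interest
    c: reports of the event of interest for all other devices
    d: reports of all other events for all other devices
    """
    rate_device = a / (a + b)   # share of this device's reports naming the event
    rate_others = c / (c + d)   # same share across all other devices
    return rate_device / rate_others

# Hypothetical counts: 40 of 500 reports for device X mention lead fracture,
# versus 200 of 99,500 reports for all other devices.
prr = proportional_reporting_ratio(40, 460, 200, 99_300)
print(f"PRR = {prr:.1f}")  # a value well above 1 may flag the pair for closer review

A high PRR, with enough reports behind it, is simply a prompt for closer epidemiologic review; it is not by itself evidence of a causal problem.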

A post-approval study is for Class III, the PMA devices, only. It can be ordered at the time of approval under an authority to look at continuing evaluation and reporting on the safety, effectiveness, and reliability of the device as it's intended for use.

Conversely, the Postmarket Surveillance Study Program or 522 Program can be utilized at any point for a Class II or Class III device as long as it meets one of these four criteria: that failure of the device would be reasonably likely to have a serious adverse health consequence; that the device is expected to have significant use in pediatric populations; that it's intended to be implanted in the body for more than a year; or that it's intended to be a life-supporting device used outside of the user facility or hospital.

The Postmarket Surveillance Study Program allows for additional information about safety or effectiveness to be captured for up to 36 months in a prospective study or for a longer period of use for a pediatric device or for retrospective surveillance. Noncompliance with this can lead to other regulatory action.

In addition to these mandated surveillance and study requirements, we also have ongoing research in our division. Our Epidemiology Research Program houses about 60 studies. We also have a data mining group within our research. We have some pilot projects that are ongoing with CMS using Medicare and Medicaid data. We're actively involved in the Sentinel Initiative and MDEpiNet, the Medical Device Epidemiology Network. We work with several societies to utilize data that's incorporated in device registries, and we also work with groups outside the United States to garner more information from outside-U.S. data.

So FDA is looking to leverage outside-the-United-States data for various things. We want to know more about data for first-in-human use. This information can represent premarket and postmarket experience before we've seen a device at the FDA. We can see more information about device utilization, safety, and effectiveness as it's seen in Europe or elsewhere, and then we can see where the differences may be in premarket and postmarket so that as we're evaluating a device at the FDA, we can leverage all of the data that's available.

We are looking to use U.S. data sources for mandated postmarket studies as well as to leverage this information for our research program. And so we're really looking toward real-world, both on-label and off-label, use when we can see this information, and we'd also like to utilize this information as we're moving from postmarket to second-generation devices to decrease study size for de novo evaluation, and even for new case-control studies, especially in rare instances -- for rare diseases or rare events.

So this would allow us to inventory all of the existing data. We can leverage all of these things. We can assess benefit and risk throughout the TPLC, through a second generation. We can work toward using everything that's available in order to make regulated studies -- these mandated postmarket studies -- smaller and more efficient and also to elucidate the needs as we move along. This will allow us to really have true knowledge management about a device and a device area throughout the total product life cycle. Here our goal is to systematically identify all of the relevant data, to use innovative analytical methods, to develop and apply and integrate all of the data that's available, and eventually to have an evidence synthesis where we can use cross-design information and really leverage everything that we have to know where a device stands and to have full evidence about it.

So looking forward, we're hopeful, as Dr. Maisel said this morning, that unique device identification will help us to better utilize all of the evidence that's available, especially when we pair it with advancements in methods and integrate it into the regulatory framework that we have. Thank you.

(Applause.)

DR. YUSTEIN: If it's okay with everybody, we'll hold the questions until after our three speakers are done so that we make sure we have time for that.

So we're lucky enough now to have a representative from one of our sister agencies. Dr. Jyme Schafer is the Director of the Division of Medical and Surgical Services in the Coverage and Analysis Group at the Centers for Medicare and Medicaid Services, and she helps with national coverage decisions, and she'll be talking to us about some issues that they face with medical devices.

Dr. Schafer.

DR. SCHAFER: All right. And it's no accident that I followed Mary Beth because currently we're working together on a project.

So, Medicare. Let me just start by saying that there is some confusion, I know, not only within this country, but certainly we have international people here. For 45 years we've provided healthcare coverage for the elderly and the disabled. Now, the elderly, as we define it, are those 65 and older. The disabled are those who are permanently disabled. Currently we have about 49 million people that we cover. It's increasing about three percent on average a year. In 2011 we're expected to spend about $550 billion, in excess of that, and that is growing.

Okay. The Medicare Program is defined in statute. This first slide is typically just to give people an idea of what the Medicare Program is like. It's very confusing. I'll draw you to a line here. It's about the financing -- that Medicare and Medicaid are among the most completely impenetrable texts within human experience. Indeed, one approaches them at the level of specificity herein demanded with dread, and I emphasize dread. Not only are they dense reading of the most torturous kind, which I can fully agree with, but Congress also revisits the area frequently, so our regulations are constantly changing, making any solid grasp of the matters addressed merely a passing phase. Very true.

(Laughter and applause.)

DR. SCHAFER: I'm sorry. Please don't quote me in the Gray Sheet as saying that. But on the other hand, I really don't care.

So the Social Security Act -- in 1965 this came about. "Notwithstanding any other provision of this title" -- so what I'm going to draw your attention to is where coverage comes from in the law. It talks about reasonable and necessary for the diagnosis or treatment of illness or injury or to improve the functioning of a malformed body member. So we talk about reasonable and necessary all the time. What's the definition? Well, Congress has not defined it in statute. They've tried to define it in the past. They haven't been successful. They've attempted rulemaking again. There's been no traction. So here at CMS this is what we talk about: adequate evidence to conclude that the item or service improves clinically meaningful health outcomes for the Medicare population. That's what it's all about.

I work on national coverage decisions, and there are two ways national coverage decisions come about. One, we can get a request from the outside where currently a device or service is not covered. We also have something called local coverage determinations. Number-wise there are a lot more local coverage determinations than national, so there can be a lot of variation throughout the 50 states. If somebody doesn't like it, they'll come to us and ask for a national coverage determination. We can also generate a decision internally if there's extensive literature or there's a new study out, there is an advance, or there are a number of concerns about inappropriate use.

Our process is defined in statute. It can take up to nine months, and 12 months sometimes. So how do you get therapeutic coverage? You provide adequate evidence that the treatment strategy using the new therapeutic technology, compared to alternatives, works -- leads to improved, clinically meaningful health outcomes in the relevant beneficiaries. For diagnostics, it's something similar.

What health outcomes of interest do we look at? The more persuasive, or what we have more confidence in, are hard outcomes: longer life, improved function, participation. You can see the list. At the bottom there, reduced need for burdensome tests and treatments.

What are those outcomes that are less persuasive to us? Well, life with declining function; a surrogate test result is better; the image looks better -- that's not very convincing. The doctor feels confident -- that's not very convincing to us.

So what type of a decision can you get when you ask us for a decision? Well, you can get what you asked for, complete coverage. You can get the opposite of what you asked for, which is complete noncoverage. You can get coverage with conditions. We have a few decisions out there that are currently like that. Or you can get no coverage decision.

So I'm really here today to talk about coverage with evidence development. What's the purpose of coverage with evidence development? It's to improve the current medical evidence. So what can it do? It can document the appropriateness of something that's out there. It can produce evidence so we can look forward and that can produce a future change in the coverage, expansion, contraction, whatever. So it definitely will improve the evidence base. Most importantly, it can inform medical decision making for providers and patients.

So how do you accomplish CED? Because remember, Medicare is a statutorily driven agency. There are two ways; I've quoted the statute there. Here are a couple of examples of coverage with evidence development if anybody is interested in looking at them. We didn't do this prior to 2005. Implantable cardioverter defibrillators -- we're collecting evidence on that. They have to be enrolled in the ACC registry for coverage. Another one, warfarin response. There you have to be enrolled in a randomized controlled trial. The one we just did last summer -- I'm sorry. Time goes fast for me. The summer of 2010, stem cell transplant. Again, they've got to be enrolled in a prospective study for coverage. We collect the data.

This outlines the coverage process for CED. So currently, interestingly enough, on our website we're soliciting public comment on coverage with evidence development. So we admit that while we have produced some gains in innovation, our experience over the last few years indicates that we've got to move forward a little bit. So we're going to weigh public input on CED with the internal lessons that we have learned to develop a guidance document that can better align CED with the rapidly evolving changes in the healthcare system, and believe me, they're rapidly evolving.

So what's the goal of all this? Why do we have this up on our website? Well, here's the bottom line. It's to improve health outcomes for Medicare beneficiaries. That's the bottom line. Public comment period is open until January the 6th. Please comment. That's the website. Again, I've included examples of coverage with evidence development, three examples. Those are websites.

Thank you.

(Applause.)

DR. YUSTEIN: Thank you, Dr. Schafer. And our third speaker is from across the pond in the U.K. This is Dr. Bruce Campbell, who is the chair of the Interventional Procedures and Medical Technologies Advisory Committees at the National Institute for Health and Clinical Excellence in the U.K.

DR. BRUCE CAMPBELL: Is this one working? Is that working? Great, thank you. No, they said they wanted (off microphone) actually forget that I'm able to see them.

Thank you for inviting me to talk for NICE. I discovered to my horror over the weekend that I was billed on this printed program to talk about the requirements for post-approval studies in the EU. If you want to know anything about those, ask Susanne Ludgate. What I'm going to do is just give you some glimpses of the way that NICE deals with devices in its recommendations, specifically in terms of recommendations it makes which involve evidence development.

Now, just as background, NICE, the National Institute for Health and Clinical Excellence, was established in 1999, and its fundamental aim is to produce evidence-based guidance -- really to try and get rid of what we would call postcode prescribing -- United States -- across the U.K. These are the broad principles by which it produces all its guidance. As you can see: evaluation based on all kinds of evidence and stakeholder input, all done by independent advisory committees, two of which I chair, explicit and very transparent processes, and a period of public consultation which is taken very seriously. All public consultation comments are looked at by the committee -- it may make changes -- and obviously there is the opportunity for --

Now, one of the difficulties for people in the U.K. is that NICE produces an increasing number of kinds of guidance, and people get confused between them. And these are the main guidance-producing programs. Each kind of guidance is a little bit different. Devices might be involved in any of them except probably public health. The best known are the technology appraisals. Clinical guidelines I'm not going to mention again; these are management guidelines for particular conditions. I'm going to talk about interventional procedures -- that is, procedures, but they often involve devices. Public health I'm not going to mention again, and I will talk to you a bit about a relatively new medical technologies program. There is also a very recent program for diagnostics, particularly for costly or complex diagnostics, which also -- as well.

Now, I mentioned this in a comment this morning, the difference between procedures and devices. Of course, procedures are not regulated, but the evidence generation business for procedures and devices is similar. I appreciate today we're focusing on the devices.

Now, again, a very important thing to understand about the way things work in the U.K. is that NICE has nothing to do with the regulatory pathway. The regulatory pathway is through CE marking -- Susanne Ludgate can answer any questions about that. NICE comes in, if you like, at the sides: with technology appraisals if it happens to be a very high-impact device which is going to affect the Health Service in terms of cost in a very major way; and with new interventional procedures, which I'll talk about in just a moment. If a device involves a procedure, it is mandatory that any clinician doing a new procedure they've not done before as a fully trained clinician needs to look it up on the NICE website, and if it's not there, they should notify NICE. And we find out about the new interventional procedures either through clinicians doing that as a matter of mandate or often from manufacturers coming and saying here we are, this is a new procedure using my device, and in fact anyone can notify. So we spread the net wide, and we write to the professional societies every year -- and medical technologies -- So those are the three programs I'm going to talk about over the next ten minutes.

Now, I'm just going to put up one slide about technology appraisals because this tends to be the NICE program that everybody has heard about. This is the one we hear about in the newspapers and on television when NICE says no. Actually NICE says yes about 95 percent of the time. But when it says no -- this is not clinically or cost effective -- then often very active patient groups make a large amount of noise. But, in fact, this is mostly a mandate to the Health Service: where we say this is clinically and cost effective in such-and-such a group of patients, you must provide it. And this is the program that uses cost per QALY. It will -- patient groups -- talking about earlier.

Now, if -- because that is a really costly thing for -- But there are devices, you can see some here, that say two things about coverage with evidence development. Number one is it's soft in its guidance. Well, it will enunciate at the end of the -- well, here's some other things we need to know. We'll make research recommendations. How often they get taken on is another matter. It may sometimes specify that things should only be done in research. One example -- cost effective -- but only in research for rupture, and that was a specific mandate to the -- technology and process program.

Now, with regard to interventional procedures, as I've already said to you -- identify -- for the first time or she looked at older procedures. One horrified me that we would refer to -- for breastfeeding. Would you believe it? There was a war out there between all sorts of different people who had different views. Very important, and this I -- perhaps not helpful here today, but I'm going to say it because it's very important for the topic on -- Even if we get notified that this is our device -- it may be the only one, we want the procedure -- and we will go to specialists and say what is this procedure called, because it may be -- next week and another -- And so the name of the procedure is always generic, and all the devices, such as they are, are covered by the same piece of guidance. That is pragmatic. You can argue endlessly about whether it's the right thing to do. It's what we do.

Now, the -- one of the problems, what do you do when the evidence is inadequate? Well, the first thing is we can say do not use because you've got 415 pieces of guidance now. Only twice have we ever said do not use because you actually need evidence and that does work or that it does -- And -- trying to get to the point people say oh, I must start using this. I'll tell NICE about it. Clinicians are not likely to use procedures that kill a patient or -- and so as a rule procedures have themselves -- before they -- will say only in research -- a great idea, but there are lots of problems with it, and I'll just mention those again in a minute or two. We seldom do that. We will tend to do it only when we've got very serious concerns and/or whether there is available research all set up that we will recommend that a patient be submitted -- new research that we can delay things and actually stop a potentially beneficial procedure.

This is -- 2001. I mean I sat with blank computer screens, and what do you do when there's -- evidence? It's not so bad you say research everything. But it's not -- to say use normally. And it seemed to me within ten years ago I remember, number one, tell your hospital -- don't do it -- you don't need the procedure. Get on with it. For goodness sake tell your hospital I'm going to do this -- Number two, tell your patients this is new. We've heard it from the U.S.A. I heard -- we're not sure what the long-term results are. That seems absolutely deceitful. And the third thing you can do, you -- your results. And so -- we've used guidance which was -- cautious guidance or special arrangements because in NICE speak that comes out in special arrangements -- governments, possible consent from the patient for -- research. And so the reverse obviously is when the evidence is good enough, we say normal arrangements used in the normal way --

Our guidance for most of the -- recommendations when appropriate about patient selection. A lot of people will say it should be done by -- routines, about facilities requirements, about training and expertise. Back to the --

Well, the two things for today were that often we will specify that we would like further research. And we used to just say further research would be useful. But since that -- is on the table, I will bang the table -- this is one of the things I do -- and say what is it you want to know? Precisely those outcomes that we were talking about earlier. What do you not know today that you would like to know in three years' time? And we will specify what those particular outcomes are, because otherwise it is just telling people to do some more research.

And then, I touched on this already, submission of registers. And so a lot of this has been said already today, but our aspiration for these things is that in a Health Service which is huge -- not like the U.S.A., but for the population size that we have -- for all procedures with inadequate evidence, we ought to gather data on them. We need a small, relevant dataset to address the specific uncertainties, probably time limited. There may be times when we do want very long-term data collection, but sometimes we only need a limited time period to address the specific questions. And obviously timely analysis, and I have this concept that a common, simple template oughtn't to cause an increase in difficulty and cost. But this is a rather elusive goal to continue to pursue.

Now, why do we pursue it? Well, our legitimacy at NICE is because we need the data to inform our review of guidance. The routinely collected data of the Health Service is not very good. Many of you will say you just haven't got codes; you can't see exactly what the new procedure is in the coding system, and we're working hard at that. And within the routinely collected data, what there is on safety and efficacy is negligible.

Now, of course, many of the things you've talked about today in terms of electronic data and bar coding, that would be great. There are many other ways in which we can improve this data -- data linkages we're working on over time. And one thing I should just say with a slight smile is you'll hear that some of the U.K. data is absolutely super in primary care. And why? I can say this here. Because they pay primary care physicians -- half their salary depends on putting data items in. In secondary care, that just doesn't happen. It's just extra work for secondary care physicians, which is why there is such inertia about collecting data.

I mean, I said initially that we are quite careful about recommending research. It can be problematic. First of all, what hasn't actually been said is that with the uncertainties, when you do the kind of work we do, we think we've asked questions, but they may not actually be adequate research questions, and formulating a good research question is difficult -- setting out any research takes ages with the bureaucracy that you all know well -- and if we say only in research, that may stop the use of potentially beneficial procedures and use of devices, and that's why you need to be cautious about them --

We continue to do these things. We use existing registers when we can. We adapt some existing registers. We're seeking new ways of creating simple ones. We're trying to improve coding, as I've mentioned already. And there's been quite a conversation about manufacturers' postmarket surveillance data, which I feel could be made so much better by having it independently supervised, by having it -- by trying to get full coverage so we know about every device. And the international agenda has also been mentioned by others.

Here are some examples of procedures for which we have had register data which has helped us as an adjunct to the published evidence in recent times. You will see, I won't run through them all here, but the three above are established registers, and others will be developed specifically. One has already been mentioned today because we -- tried to -- replacement. That's one that we've actually had set up, a specific part of the National Cardiac Register.

My last few slides are about the new NICE program for medical technology set up -- and the aim of this was to try to identify new specific products identified by manufacturers which are the kind of products that actually offer some advantage over what we do at the moment and try to begin -- to adopt them more quickly. So -- say why on earth didn't we -- and take this on sooner. That's the -- and these are devices and diagnostic -- by manufacturers. The committee, the Medical Technology Advisory Committee, we look at them and say does this look as if it's something we should select for evaluation on the basis that it gives advantages -- current management. That's -- said. Said what's current management? That can be really -- for your comparators. And so we need to find that with health specialists, either advantage in terms of patient -- use of resources, cost variable, not a new, more expensive thing. And there's a salability agenda.

Our recommendations to the Health Service, we hope, will say the evidence supports the case for adopting this in such-and-such a group of patients because it will give this advantage, and, in parallel, if you do this, it will give you a savings of 450 pounds per patient over the next year.

We've built into this for the first time ever some funding from NICE to set up research. So -- specific particularly -- utility questions related, we can actually arrange to have those researched and just by -- for today, the top one's the diagnostic. Today here's the example. This is a thing called -- earlier. This is a way of shining -- through --on this, and there is some evidence this can help chronic wounds like -- ulcers to heal more quickly. And there's enough evidence -- that this a fine, promising thing to do, but we're actually, as it were -- commissioned research, some of the basic -- concept about which is in doubt, but also some more -- evidence. And the way that we're doing this, NICE at the top saying we need more evidence. Moving down to that -- we have contractors -- centers, Richard Lilford being involved in one of them. And between United States -- centers -- the manufacturer, we will help them by designing research questions, protocols, dealing with ethics, dealing with everything that's needed until -- money or with even them having to do with research grants, NICE will help them with that in order to assist manufacturers towards better evidence. It has to be done on a limited timescale then using the outputs to guide our future guidance.

So the aim of this research arm of the Medical Technologies Program is any -- all this includes the adoption of new technologies but to improve research into devices and diagnostics specifically by demanding good evidence for our evaluation -- no good evidence but advising -- adequate and help with the cost of research on what would be the -- the gist of -- thank you.

(Applause.)

DR. YUSTEIN: Thank you, Dr. Campbell. I think we're actually on schedule, so maybe since we started a little late we can take a few minutes for questions. So if anybody has any questions for one of our three speakers, Dr. Campbell, Dr. Ritchey, or Dr. Schafer, if you have a question, feel free to raise your hand. Identify yourself for the transcriptionist.

Yes.

DR. LILFORD: Richard Lilford.

Dr. Schafer, I really enjoyed your talk, and I must congratulate Medicare on getting such an excellent process for translation of research into practice. When I was on the R&D Committee of NICE, we discussed this many times, and as you heard from Bruce, it's quite easy to say only in research, but what we've found anyway, as Bruce has just said, is that it's much more difficult to actually come up with the process by which there is a research program going on.

Can you tell us something about how you translate your paper in research into reality and include in your answer the issue of the patient who wants a new treatment and doesn't want -- she wants a 100 percent chance of getting it, not a 50 percent chance?

DR. SCHAFER: Good question. So as I said before, healthcare is rapidly evolving, and I think what we're trying to do now in those circumstances is try to look forward and have an idea of where that clinical trial is going to come from. NIH, I note, is not here today; AHRQ, other people. But the money for clinical trials -- I think that's what you're trying to get at?

DR. LILFORD: (Off microphone.)

DR. SCHAFER: It's tough. We can provide -- like in IDE studies we provide routine costs. We don't pay for the device. As for covering the clinical trials, sometimes we can. The other thing I want to mention is we don't provide administrative costs for registries. We can't pay for that, so that funding has to come from somewhere else. So yes, funding continues to be an issue.

DR. REDBERG: Certainly if it's not available outside of the trial, there isn't a choice of getting it 100 percent of the time. And I think that's the most successful model. I mean certainly like PFO occluders, when they were available outside the trial, I mean when we've -- speaking as when we've tried to do clinical trials, doctors will tell us why should (off microphone) -- procedure when we can get paid all the (off microphone) I think most successful if there is (off microphone) you know, we're collecting new data. We don't know, and so if you want to get that procedure, it's just within -- I believe that was what was done in TAVI. You know, you had to be in the trial in order to do it.

DR. LILFORD: And there are quite a few examples of that. The ECMO in my country, artificial lung, that was only available to people who were prepared to go into the trial on a 50 percent chance they'd get it.

DR. REDBERG: The lung volume reduction surgery here I think was --

DR. DAHM: Also a question for Dr. Schafer. When you determined that there would be coverage with evidence development, how do you determine -- who makes the decision -- what kind of research will be done? You showed us examples where the research will be a randomized controlled trial, but there also are many examples where these are observational study designs. I understand that PET imaging is currently funded through such a model, and what has been your experience with the evidence coming out of these observational studies? Will you be able to determine whether PET is really efficacious in all these indications based on the information you're gathering?

DR. SCHAFER: So if I hear you, the first part of your question -- I'm not going to give you an easy answer, because each device, each service is different. They come with vastly different -- the outcomes are so different, subjective versus objective outcomes, the wealth of evidence coming in. So all I can say is it depends; it depends on a lot of things.

Is there a second part to your question?

DR. DAHM: Well, I don't know whether you can address this specific example, but with PET imaging, I think there have been concerns raised about the type of evidence that you are going to generate -- whether there's really any informative evidence about PET that will be generated from this observational data.

DR. SCHAFER: So another good point. For things like that, I'm going to answer your question indirectly and refer you back to our CED guidance document. And if you could express your ideas on that, that would be helpful to us.

DR. YUSTEIN: Okay. Quick questions before we move on?

DR. FEINGLASS: So this is Shami Feinglass from Zimmer.

I would echo Dr. Schafer's point from Medicare. Use that public comment period as a way to comment to the Agency. The Agency can really only do what they put on paper or what other people suggest through public comment in a very thoughtful manner. If you comment publicly, it allows them to consider those things and possibly incorporate them into their regulation. And having worn that hat before, I would encourage everybody in the room to comment so that you can help move the direction of Medicare.

DR. HENEGHAN: Again, maybe somebody can help me. So when we look at the postmarketing surveillance study, the 522, what slightly perplexed me was that the study duration is 36 months for nonpediatric studies. Yet Medicare has given us the example of requiring an ongoing registry with ICDs, because your postmarketing surveillance should match how much harm there could be if the product goes wrong. So I slightly felt it was a contradiction, in that after 36 months some devices may actually be serious enough to warrant a mandatory ongoing requirement. And that's particularly important if you make incremental changes to the device. So I would want you to explore this 36-month idea again and say actually maybe it's more likely to be variable.

DR. RITCHEY: So that 36-month requirement for the 522 study -- the 522 studies are not ordered for every device. It's just when some sort of safety or effectiveness signal has arisen. And so the thinking is that within 36 months we should have additional data that will help us to determine what needs to happen next.

I'd also like to say that that is for a prospective study only. We can look at retrospective data, so if the device has been on the market for ten years, we can look at the past ten years' worth of data in addition to three prospective years.

DR. YUSTEIN: Danica, did you want -- did you have something to say?

DR. DAHM: I got a question for Dr. Campbell. You mentioned the special arrangement that NICE makes. What -- how do you monitor that these special arrangements are actually met, and what happens if they don't happen?

DR. BRUCE CAMPBELL: I can't give you detailed feedback on that. But what I would say to you is they're part of the core requirements of our Care Quality Commission when it inspects hospitals. The Care Quality Commission can descend almost unannounced on any hospital now and has swingeing reprisals that it can take if things are not as they should be. And following interventional procedures NICE guidance is one of the core requirements that they can check on. And that's one of the reasons I've injected into them one or two things that you can check on. For example, if we make a recommendation regarding consent, then they've got to make some special arrangements regarding consent. That includes written information for patients. So, you know, people have to have -- and you can check on it -- written information for patients about the novelty of the procedure. Things like that can be audited.

DR. YUSTEIN: Okay. Why don't we go ahead and move on to the next section, and I'll just introduce our facilitators for the next think tank, and those are Trish Groves, Rita Redberg, and Phil Desjardins.

(Asides.)

DR. REDBERG: I have the microphone, so I'll start, and then we can -- but I wanted to start because the title of the think tank was on conditional approval, and where we could go was to use the talks we just had as a springboard and talk about what features we would hope to see in CED and what kinds of devices and procedures we'd like to see CED for going forward. And from what I heard Bruce say, the key features were that it should be independently supervised and the data should be transparent, and then I guess I added new data because I always think of the ICD experience, which I think is a great start for coverage with evidence development.

I was on the Medicare Coverage Advisory Committee in 2003 when we reviewed the expansion for ICDs, which was brought forth by Guidant, which I guess is -- anyway. And I think the data was helpful, but from what I learned from that experience, what you'd want to see in registries is outcomes, because that was just an in-hospital database. There were absolutely no long-term outcomes, and so you really need to know what happens, and so a richer database. And also Medicare actually should be able to use the data to then go back and say are we covering the right groups; are we covering the groups that we intended to; are these people benefitting in the way we intended; are there more people that would be benefitting, depending on the data; or are there people that are getting defibrillators that are not benefitting -- and then actually readjust coverage, because, as we know, defibrillators are lifesaving for some patients, but there are, I would say, a significant number of patients currently getting them for whom they're not lifesaving. And we could be learning a lot more, I think, from our registries.

So I thought we could start off and have some discussion around experience and where we should go, what we've learned from what we've done.

DR. DAHM: May I ask a very basic question? We seem to be using the terms "conditional approval" and "conditional coverage with evidence development" synonymously -- we keep slipping in and out of these two terms. They don't necessarily mean the same thing, do they? I mean, coverage is a Medicare decision to pay for the service, which doesn't necessarily imply that other payers could not use it, whereas conditional approval by the FDA would be a much more fundamental thing.

DR. REDBERG: I think that's an important -- I don't know if anyone from CMS or FDA wants to comment.

DR. MARINAC-DABIC: I just wanted to clarify, does everybody understand what a condition of approval in the FDA language means? Maybe I could just clarify that even if a device is approved with conditions, it's still approved for marketing. So anybody can use it. There are no restrictions. All the company has to do is to comply with the conditions imposed at the time of the approval, which are typically -- typically two studies. One is continuation of the follow-up of the premarket trial, and then also a new study of newly enrolled patients to actually address more of a real-world type of utilization, learning curve issues, depending on the device. But you would typically ask for more than one study at the time of the approval.

So that's even, I think, a third category, Phil, of what you're referring to there. Maybe there is some misunderstanding about what real conditional approval is versus coverage with evidence development or national coverage.

DR. BRUCE CAMPBELL: Might I just comment, adding to that? After the talk that I've just given, of course, one complete difference you have to realize is that in the U.S.A., when you're talking about the FDA and conditional coverage, conditional approval, whatever it is, your relationship is entirely with the manufacturer. Medicare's relationship is with the payer, and our relationship at NICE is with the NHS and the clinicians to whom our guidance goes -- so actually we're all in fact giving guidance to slightly different groups of people, which in a sense stacks up the confusion.

DR. GROVES: Yeah, well, I have to keep going on about this. I'm sorry. But we have a lot of people in the room who are not regulators, doctors, researchers, academics, or from Medicare. We have patients and members of the public, and we're talking about devices and procedures being used where there isn't a lot of evidence. So does anybody in the room who has not spoken yet, who is either a patient or a member of the public, have anything they want to say or ask in this session? Because this is the nitty-gritty stuff. This is where we don't really have much evidence. So what do you guys want out of the people in this room? Is there anybody who's itching to speak who hasn't had a chance? We've got an hour.

(No response.)

DR. GROVES: No? We've lost Marlena, actually, haven't we, who's on the panel. But anybody else here who -- you've all been here all day.

(No response.)

DR. GROVES: Okay. I just wanted to check because it seemed like your chance in the program. All right. Fine.

Just the other thing I wanted to flag up was that Bruce was asked to speak about how all this works in the European Union, and quite rightly he didn't speak about that. But I would love it, Susanne, if you could just give us five minutes, just quickly explaining what happens in the European Union because it's 27 countries. It's quite a lot of people.

DR. LUDGATE: Thank you. I was once asked to do it in two minutes, so I'll try hard. If I can -- right. I'll start at the beginning. In the European system the manufacturer has to demonstrate that his device complies with the relevant essential requirements that cover performance and safety and that his device has a positive benefit/risk analysis. That allows him to put a CE marking on his product, which then means it can be sold freely without any further constraints throughout 27 European countries.

Now, except for the very simplest devices, the tongue depressors, etc., that CE marking has to be checked by a body, an accreditation and auditing body known as a notified body, and there are 83 of these throughout the European Union, and in theory a manufacturer can go to any one of those. Now, not every notified body obviously covers every device, because there are about 90,000 devices on the market, and some specialize in certain devices. And certainly there are bigger ones and smaller ones, and the bigger ones take the lion's share, but there are 83. So I think you need to bear that in mind.

I think you also need to perhaps bear in mind that these are commercial organizations. In other words, their money comes from the manufacturer. Nobody else pays them. And that, you may think, may introduce a slight conflict of interest, because it isn't in their interests to perhaps turn around and say this is rubbish, go away and do a proper clinical trial, etc.

Okay. Your notified body checks, says fine. Okay. Your data's there, CE marking. That's fine. European market. There is then an onus on the manufacturer to undertake a postmarket surveillance program, and that program should reflect the classification of the device, the risk of the device. You know, if you've got something like a syringe, you don't need much more than a customer complaint postmarket surveillance program, whereas if you've got a novel device or a risky device, you might want postmarket trials; you might want a much more detailed system. And it's really up to the notified body to check that the postmarket system that the manufacturer is proposing to put in place is proportional to the risk, seems sensible, the endpoints seem sensible, and it's also up to the notified body, when the manufacturer comes back for CE marking, to check that that's being carried out and that the data that comes out of that is incorporated within any risk analysis.

Part of that also, the manufacturer has an onus to report any serious adverse events -- and that's defined -- to the competent authority or the regulatory authority in the country in which that event happens. In the U.K. we have also, like the FDA, got a user reporting system, which has been extremely valuable. I have to say that an awful lot of the problems we identify come through that. We don't get as many as you. We get about 12,000 adverse events a year reported to us. And then it is up to us as the regulatory body to take action on that. We will investigate these. We will take action, which might mean a variety of things. It might mean taking the device off the market. It might mean working with a manufacturer on some modification. It might mean issuing advice to the Health Service.

But that's essentially what we have in place, and then we have a compliance unit that goes out and will investigate, make sure various manufacturers are doing what they say they're doing. If we have any concerns about a manufacturing site, they will go out and audit that.

So that in a nutshell is the European system.

DR. GROVES: Just to clarify then that manufacturers could go to one country in Europe, so they could go to Romania or Poland or wherever, and get the CE mark, which is the same mark that you would get on a hairdryer or a toaster --

DR. LUDGATE: Well, it's not the same. I mean it looks the same but --

DR. GROVES: Yes, but it's the same --

DR. LUDGATE: -- there are certain requirements that it has to cover which I trust will be different. Okay.

DR. GROVES: Well, okay. But it could be, but it could be a small case series in one center.

DR. LUDGATE: Well, the notified body, in assessing the data submitted by the manufacturer, has to look at the clinical data. Now, you don't need clinical data for every device. Okay. You know, a syringe is not going to need clinical data, probably not. But for the more risky devices, what we call the Class IIb and Class III devices, there must always be clinical data. Now, we have the same problem, I guess, as the FDA here, because that clinical data can come from specifically designed clinical trials, in which case, as the regulatory body, we have to approve them, and indeed we turn down quite a few. Or it comes by equivalence with a similar device, and that's where the problems start, because trials are expensive. Manufacturers want to claim equivalence, and they are not always equivalent in a lot of their features and functions. And I have a lot of problems with that. I think it's not done well by a lot of notified bodies, and I think it's not challenged enough, by notified bodies saying look, you know, this may be a knee implant, but actually it's really quite different; or, you know, you really need to go and do a clinical trial here. So I don't think there's enough challenging that goes on about clinical data.

DR. GROVES: And, again, the bodies that are giving the approval, this kite mark if you like, are funded by the manufacturers, solely funded by the manufacturers.

DR. LUDGATE: Yes. Okay.

DR. GROVES: So we heard about the FDA device epidemiology program, and the fact that the FDA might look to countries where a device had already been approved, effectively, for the data. But it sounds like the data, if they're coming from Europe, may not always be worth having. I don't know.

Did you want to say something, Art?

DR. SEDRAKYAN: Just to clarify then, coverage with evidence development or conditional approval will be notified body's responsibility in Europe?

DR. LUDGATE: The approval -- that the CE marking is carried on the device -- is the responsibility of the manufacturer, okay, but it is checked by a notified body.

DR. SEDRAKYAN: By a notified body.

DR. LUDGATE: Yeah.

DR. SEDRAKYAN: So it won't be MHRA. It would be notified body that would impose conditional approval.

DR. LUDGATE: Unlike drugs, we do not centrally license devices.

DR. SEDRAKYAN: Um-hum.

DR. LUDGATE: Okay.

DR. DESJARDINS: Is that truly a conditional approval scenario or is it -- so I think in the FDA sense, with the post-approval studies, the conditional approvals that we're talking about, there is some subset of unanswered questions, and we identify here the questions that we would like to be answered once the product reaches the market. Is there a similar scenario in the EU context?

DR. LUDGATE: No. You either get the CE mark or you don't. Now, you know, they may recommend or demand that your postmarket surveillance program incorporates certain questions. But that's not conditional, okay. You either get it or you don't.

DR. DESJARDINS: And I think we face some of the same limitations. One of the things we struggle with with our conditional approval authority is that we can only order the questions that we're aware of at the time of approval. If new issues pop up subsequent to approval, we're stuck within our 522 authority, and it's a little bit more limiting. And I think in those scenarios it forces the question of: now are we still talking about devices or procedures? Because I think the procedure question of how is this device being used, and how is it being used in the real world, becomes the new question, and I think we're looking to CMS more and more to help answer that question.

DR. REDBERG: -- it is --

DR. LUDGATE: Can I just make a point that --

DR. REDBERG: Oh, I just --

DR. LUDGATE: -- occasionally if there is some safety issue that we recognize in the postmarket phase, we will tell the manufacturer look, you go away, and you do a proper postmarket surveillance program, okay, which we will carefully monitor, all right. And we have done that in two devices I've seen recently.

UNIDENTIFIED SPEAKER: (Off microphone.)

DR. LUDGATE: Yeah -- off the market.

DR. REDBERG: You've taken it off the market because that's what -- how often has FDA ever taken a device off the market because of a problem with a post-approval study?

DR. DESJARDINS: I'll defer to Mary Beth, but I believe the answer is zero. Recently we've been a little bit more transparent in how companies are meeting the post-approval requirements, but I think the answer is we've not taken anything off the market.

DR. REDBERG: So I just wanted to get back maybe to the way we could have FDA and CMS work together, linking conditional approval with coverage with evidence development, because if coverage were linked to the additional data, I think we would get very robust data and very robust answers, and there would be real consequences. Certainly having a post-approval study but having the device out on the market -- you know, as physicians we certainly know that once the device is out on the market, and people start doing it, people invest in a lot of equipment to start doing it, hospitals start marketing it -- you know, a year later if you find out it's not as great as you thought it was, there's not a lot of incentive. It's very hard to stop the train once it starts speeding away from the station. However, if there were actual teeth, in terms of FDA approval being contingent on good data a year later and coverage being contingent on good data, and that being known at the time of approval, it would be a very different story. Otherwise, I think that's part of the problem. And I'd certainly be open to other questions.

But I think going forward we'd want to have, you know, FDA and CMS coordinated in conditional approval and in coverage and to have real consequences to not -- either not fulfilling the postmarketing surveillance or not supplying the data, which I believe happens, or the data not turning out to be what we thought it would be. Of course, I mean there will be times maybe it would be better, but I think we have to be prepared that there could be rescinding of approval and coverage.

DR. BRUCE CAMPBELL: Can I just ask a factual question about that? If you produce coverage with evidence development, obviously that's an onus on the manufacturer. But the manufacturer surely must depend on the clinicians to get the data they need. How does that work?

DR. REDBERG: As I think Rick said, most clinical trials in the U.S. are industry sponsored. I mean there's very close relationships between industry and clinicians to generate most of our clinical data.

And, Peter, did you want to --

DR. McCULLOCH: Yeah. I just wanted to say how much I agreed with what you said. You know, what I was saying when I said I thought I perceived a bigger role for coverage with evidence development in the future was that, you know, the FDA were going to start, if you like, thinking outside the box in terms of how they could approach a new post-510(k) scenario. And the sort of use of coverage with evidence development that you're describing matches very much with that vision. So what I'd like is for maybe Mary Beth, I don't know, the FDA to respond to that and say what do you think of that idea.

DR. MARINAC-DABIC: Well, a couple of things for context. We currently have over 150 mandated post-approval studies for products that have been approved, and they are still ongoing. Eighty-five percent of those studies are progressing well, and what I mean by that is that they are progressing according to the timelines that were agreed upon with industry at the time of the approval. We very rigorously monitor the progress of the studies. Every six months we receive a report from the sponsors, and we certainly review it for real-time signals. If there's a problem, we don't necessarily need to pull the product from the market based on the results or inefficiencies in a post-approval study. There are other actions that we can take.

For the ones that are not -- also, other criteria that we consider when we classify those studies as progressing adequately or not adequately are that we mandate that the follow-up rate in those post-approval studies never go below 80 percent and that at any time point during the post-approval study, 90 percent of the data that needs to be collected is collected. So those are pretty rigorous criteria in terms of what we consider adequately progressing. So when I say that 85 percent of the studies are progressing well, we're actually proud of that percentage.
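
To make the two numeric thresholds just described concrete, here is a minimal sketch of how an interim report might be screened against them; the function name and the interim figures are hypothetical, and in practice the classification weighs more than these two numbers.

# Minimal sketch: screen a post-approval study interim report against the two
# thresholds mentioned above (follow-up rate >= 80 percent, data completeness
# >= 90 percent). Names and figures are hypothetical, for illustration only.

def study_adequately_progressing(follow_up_rate: float, data_completeness: float) -> bool:
    """Return True only if both illustrative thresholds are met."""
    return follow_up_rate >= 0.80 and data_completeness >= 0.90

# Hypothetical interim report: 83 percent follow-up, 94 percent of required data collected.
print(study_adequately_progressing(0.83, 0.94))  # True
print(study_adequately_progressing(0.76, 0.95))  # False: follow-up below 80 percent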

There are still 15 percent or so, and this number changes from month to month, and you can check on our website; sometimes it goes up to 20 percent. All of these things are posted on the website. We work very closely with industry to find out the reasons why studies are not progressing, and sometimes there are legitimate reasons. Sometimes lack of coverage is really the reason, because, you know, the companies cannot always pay for those procedures and the devices. Sometimes there are other reasons, and we sometimes modify not the requirement but the venue of how the study is going to be done. We augment the original post-approval study mandate with outside-of-U.S. data, and we have done this recently in the orthopedic arena for ceramic-on-ceramic hips, when we allowed a company to use the Australian registry to supplement their original mandated post-approval study and also the use of Kaiser Permanente U.S. registry data to add to the original conditional approval.

So there are, again, other things that we can do. We would never sit on a signal and not do anything. But, again, the point is very well taken that there are difficulties with the studies. Just to remind everybody, for the orthopedic devices we ask companies to follow the patients for ten years, which can be quite burdensome.

In terms of the cooperative work between us and CMS, I think more and more we are really working much more closely, and tomorrow we will be discussing in more detail a very useful and productive collaboration that we had recently in the area of the transcatheter valve that was just approved in early November, and we already have the national registry in place. It will capture the post-approval study patients, and CMS has already opened the coverage decision.

So, Jyme, do you want to add to that?

DR. RITCHEY: I would add to that just a little bit actually. So the regulations are slightly different for the FDA and for CMS. And so what we're asked to do as far as our regs and what CMS is asked to do as far as their regs complement each other but aren't the same thing. And so the coverage with evidence development decision really is CMS' purview. And so while we are in a new era here, and we do have the opportunity to work together and to talk more about what's going on, the condition of approval and the determination of that is an FDA reg. The coverage with evidence development is a CMS decision.

DR. MARINAC-DABIC: And if I can just add to that. Yeah, that's very important. However, CMS' decision to actually open the coverage decision helps us tremendously in the actual enrollment of our post-approval studies, especially in this particular case that I just cited. And we know that if this is open, then we're going to have much better participation in the registry than we would with the traditional post-approval study.

DR. GROVES: (Off microphone) the order, if it's the relevant was --

DR. REDBERG: -- wanted to comment --

DR. GROVES: Oh, okay. Sorry. Did you want to come back?

UNIDENTIFIED SPEAKER: (Off microphone.)

DR. GROVES: Okay. It's just that they've been waiting for ages. I don't know if it was relevant to this conversation.

UNIDENTIFIED SPEAKER: (Off microphone.)

DR. GROVES: Are you in on this or something else? It was Art, Carl, Richard all were trying to say something about ten minutes ago. Okay. So --

UNIDENTIFIED SPEAKER: (Off microphone.)

DR. SCHAFER: I just want to throw in here that just because the FDA approves it, we don't have to pay for it. That's pretty important I think because ultimately what we're all getting to is industry wants to be paid for it. I mean I certainly understand that. That's why I'm interested in what you have to say, Richard, and you also, Shami.

So the other thing is that once they approve it, we haven't even talked about whether the -- anything like that. I mean once they approve it, that's the first hurdle in this country to get it paid for, but that's not the only hurdle, so there's a lot more to it.

DR. REDBERG: Would you comment on how often FDA approves something but Medicare doesn't pay for it? Because I sit on California's Technology Assessment Forum, which is advisory to Blue Shield of California, and the most frequent thing we hear is FDA approved it, you must pay for it. And, you know, even though the evidence criteria are different for the California Technology Assessment Forum than they are for FDA -- there's a lot more emphasis on clinical outcomes in the CTAF criteria -- there is a sort of unwritten understanding that people think once FDA approves the technology, it should be covered.

DR. SCHAFER: Right. You're absolutely right, Rita. Thank you. That assumption is untrue, but in practice it's not very often that the local contractors noncover. We do have initial noncoverage -- I've been involved in a few -- so it does happen. But you're right, by and large, it's relatively rare.

DR. REDBERG: Right. And the last thing I'll say, particularly for 510(k) devices, where there isn't a criterion for safety and effectiveness and we're talking about substantial equivalence without clinical data, I mean certainly that is not the same as reasonable and necessary for Medicare, and most private insurers want clinical outcome data. So there is then that tension between having FDA approval for a device and not having clinical data; that happens commonly.

DR. SCHAFER: 510(k) is seamless to us, but I also want to throw in, too, you know, I'm interested in innovation as well. So it's not just one thing. It's many things. We're interested in professional societies, industry, patients, what everybody has to say; so it's not just one thing. It's complicated.

DR. KUNTZ: Thanks. Just to add to that, one thing we probably should do is draw up a taxonomy of what we're studying and what we mean by postmarket. So a company like ours will spend $400 million next year on clinical research, and the majority of it is in the postmarket, not in the premarket. That's something that most people don't know.

So there are a couple of dynamics I think that are going on here. One is should a product be approved for use? And that's generally tested by a premarket study, which has narrow indications and usually high validity, like a randomized controlled clinical study. And you know about all of the artificial aspects of a randomized study, which improve validity but make it difficult to determine its real effective use. So we use terms like efficacy when we talk about premarket and effectiveness when we talk about postmarket.

But a legitimate aspect of postmarket studies that don't ask the question of whether a product should be withdrawn from the market is how it is being used in the real world. That looks to off-label issues. That looks to people who weren't eligible to be selected, for example, for the randomized studies. How does it perform in the average setting? That's really critical data to have, and we probably don't do enough of that.

And it's often difficult because we can't promote the study of these devices outside their labeled use, because that would be promotional. So it's somewhat difficult sometimes to say, well, how do we study use off label if, in fact, that's also tied with promotion? So that's where I think surveillance comes in nicely, because I think we do have an agreement that if we do surveillance with a hands-off approach, we can actually get at least a sense of what's off-label and what's not off-label. If there's substantial use off-label, it would drive us and motivate both the regulatory agency and us to say we need to do a study in that arena.

So there are a couple of risks here, one of which is should a product go to market, and then should it be withdrawn? That's always a question to be asked. But probably the majority of our postmarket studies ask where is the appropriate application of a device once it gets into the real world? How should it be refined? And the actions taken are not to pull the product from the market but to update the analysis and estimation of the endpoints so that people can be better informed.

And so these are the kinds of things that we think about as manufacturers when we go forward. Of course, we want to pull a product from the market if it doesn't work, like anybody else does. There's absolutely no business value for us in having a product which causes harm or doesn't do what we say it's going to do. And I think more and more manufacturers recognize that as something that they own and want to do.

But specifically I think -- and I had mentioned earlier, we have to have a way to disseminate and estimate and summarize what we're learning about devices in the postmarket so we can update those notions about what devices are, and we can use that update to say here's the ten-year data now with this degree of certainty. You know, before we only had three-year data. Now, we have five- and seven-year data, and here are the -- around those estimates. And we don't do a good job of that. We don't have a way to inform people about that per se, and that's something I think we need to work on.
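As a concrete illustration of the kind of updating Dr. Kuntz describes, here is a minimal sketch with entirely made-up numbers (the cohort sizes, event counts, and follow-up years are placeholders, not data from any device): as longer follow-up accumulates, the event-rate estimate is recomputed together with its uncertainty, and that is the kind of summary that could be disseminated.

```python
# Hypothetical sketch: updating an adverse event rate estimate as follow-up accumulates.
# All numbers are placeholders; none come from a real device study.
# A real analysis would use time-to-event methods (e.g., Kaplan-Meier) to handle censoring.
import math

follow_up = [
    # (years of follow-up, patients still evaluable, cumulative events observed)
    (3, 2000, 40),
    (5, 1500, 52),
    (7, 900, 45),
]

for years, n, events in follow_up:
    rate = events / n
    # Normal-approximation 95% confidence interval for the cumulative event proportion
    half_width = 1.96 * math.sqrt(rate * (1 - rate) / n)
    print(f"{years}-year data: {rate:.1%} "
          f"(95% CI {max(rate - half_width, 0):.1%} to {rate + half_width:.1%})")
```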

DR. FEINGLASS: So Shami Feinglass.

And one thing I'd add to Richard's comments is absolutely, I mean if there is a product that's not performing, we're, as Medtronic does, looking to move that off the market. That's in our best interest. Many of us who work at these companies are physicians. We're not going to go out and harm patients.

The second thing is to his point about generalizability. To be clear again, manufacturers can't do a study on off-label use. I can't. I can't touch it. I can't report on it. I can't talk about it. People in the audience who are clinicians in practice can do those studies. I can't fund you. I can't report on it. So, again, there's a vacuum there for off-label use that really has to be addressed by societies, academic societies, specialty societies, but cannot be addressed by industry by law.

The other point there is that in looking at how you might put out the information on the different endpoints at different year points that Richard's talking about, one of the issues is there have got to be people that want to publish on that. So, for example, even if industry wants to publish on that, there's sometimes, as we know well, bias in the information that industry is putting out. By definition I work for an orthopedic company. Everybody in this room should figure that I need to be transparent about what I'm doing with my data. So if Dr. Graves is doing a study for me, it needs to be transparent that I have paid him to do that study; he is doing that study, but yes, there's industry money behind it. But that said, you need to understand those biases, weigh them, and then look at the evidence that's in front of you.

And I think that some of what has to happen is either different collectives coming together to look at a cross-section of devices to be able to publish on how they're doing going forward, or specialty societies looking at their specialty and how the devices in their specialties are performing, to get that information out there. Because for some of us that have devices that stay in the patient for a long time and generally do very well for a long time, there are other ways to get that information out. Registries help, but there are other ways of postmarket surveillance that I think will help the community, that this group can help come up with ideas about.

DR. DESJARDINS: Just wanted to add one point of clarification. I'm not disagreeing with the statement there are significant consequences to promoting a product off label. But FDA doesn't prevent people from studying them. The limitation there is if you want to study a product off label, you may need an IDE to do that. So, again, sort of -- it's actually the study of a new indication under --

DR. FEINGLASS: Yes. So we can do it if we're going to study a new indication, absolutely. But there's another hurdle there that people probably know about. You need to see whether that's an off-label use that is useful enough to justify going about doing an IDE, and I'm happy to have that discussion in the corner with someone. But there's another issue there, yes. We can study things for new indications, absolutely.

DR. DESJARDINS: Just wanted to make sure that people didn't think FDA was saying that once it's approved, you can't study it for something that it's not approved for.

DR. REDBERG: Do you want to make a comment?

DR. GROVES: Well, yeah. I mean, well, I guess you can shout, I think, if you want to say something.

DR. HENEGHAN: Well, I'm going to shout.

DR. GROVES: Sorry.

DR. HENEGHAN: Look, we're all coming around the world to the same perspective, and I'm going to get you to focus just on implanted devices alone, and this is our perspective having looked at this area for about two years. Of the devices implanted into your body today, there are ones on the market that will cause harm, and there will continue to be ones in the future. Even if you put RCT data against them, they may be harmful. RCTs don't predict long-term harm. They don't predict new manufacturers coming into the market and claiming equivalence to those devices.

So what we should be focusing on is saying, for all implantable devices, let's start to work out how we get them all into a registry on a national basis. Then we can coordinate internationally, and then we need to, as you're saying, learn more about the signals. And the sooner we get our act together and go down that route, the sooner we can start to make a difference.

But at the moment, the way it's studied, we make a decision about which ones to choose or not. The problem with that is we have no idea which are going to be the bad ones, and Stephen can give you the best reason for why you need it. The reason is they had the best hip registry. They didn't know it was going to be a problem, but they had the best ability to detect the signals. And so fundamentally that's what we've got to do, and that's why I think we have to start getting a coordinated approach.

DR. LUDGATE: I would absolutely support that because I think registries are absolutely the way to go. You know, post -- I'm really impressed by your postmarket surveillance data because actually I don't think we could match that in the U.K. My feeling very much is that postmarket surveillance programs are very expensive, and that may be okay for big manufacturers. It can be a real problem for small manufacturers, particularly if their devices are not used in great numbers and are spread around the country.

But also, you know, clinicians get bored with supplying the data. I mean our surgeons have an attention span of about three weeks. Patients get bored. If they're well, they don't want to come back. And as time goes on, it becomes increasingly difficult to interpret the signals. You get co-morbidities. So I really would support registries. I think it is the way to go.

DR. GROVES: And none of that stops people reporting adverse drug reactions. I mean patients and doctors report adverse drug reactions. They don't have to prove that the drug caused the reaction. They just say I've got a hunch, and that's enough, because when you have enough hunches gathered together, you say hang on, there's something going on here. We've had so many cases now where somebody thinks this drug is related to this event. So I don't necessarily buy the fact that it's harder for devices to pick up those long-term signals.

DR. HENEGHAN: Can I come in on that?

DR. GROVES: You should have registries as well, absolutely.

DR. HENEGHAN: No, can I come in on that? On drugs, you see, you've got lots of databases and registries where you can just go straight in and say let me look at 100,000 people on this drug Demerol. And you can do that really quickly and effectively.

DR. GROVES: Spontaneous reporting which has been going on for years. You know, we have a yellow card system --

DR. HENEGHAN: But we can do that easily with drugs. We can't -- you can't go in and say tomorrow, give me 1,000 hips or X, Y, and Z. It's because it's so --

DR. GROVES: The data on there is --

DR. HENEGHAN: -- it's so all over the shop. Who's leading? Who's deciding to do it? There's no coordinated approach right now, so some manufacturers might be doing it because they're really responsible. But then again, some aren't.

DR. GROVES: Which comes down to what you said over lunch, Susanne, which is that the regulation of devices was a system, certainly in Europe, that was set up for manufacturers to get to market. There was no public health imperative. It's an entirely manufacturer-focused process, whereas hopefully we're all here talking about a public health imperative --

DR. LUDGATE: But I think you should --

DR. GROVES: -- and how to flip that system.

DR. LUDGATE: I think, you know, it's all right saying we have a drug system, you know, where you report adverse drug reactions. Yes, we have a system where you report adverse device reactions. You know, but actually that's the tip of an iceberg, and frankly, you know, it is mandatory for manufacturers to report serious events. But they can only do that if they learn about them, and as time goes on, clinicians don't report them as much. Patients don't report them to the manufacturers. So it is a law of diminishing returns. So I do come back to the fact that if you're trying to pick up things in the long term, registries are the way to go.

DR. SCHAFER: I'm just going to interject for a second so you get my point on this. So CMS, remember I was talking about this before, Shami had mentioned it, we are responsive to public comment. We have an open public comment period. I will ask you all to respond to that. All right? You know, that's -- we start here. Okay?

DR. KUNTZ: Can I make a comment about the registry for one second? Just to -- sorry. I agree. I think we are making a big effort to try to develop a network to analyze data, and registries have a bad name because in the past they encompassed, you know, studies with incomplete ascertainment, poor data collection, bad analysis, and probably supported a lot of the ascendancy of randomized controlled studies. I don't think spontaneous reporting is actually a good methodology to detect signals from devices, because of all the stuff that you said. What we want to have is at least 95 percent follow-up, high ascertainment, so you minimize incomplete ascertainment or bias, which is critical.

And then if you look at the hierarchy of what a registry can do, it certainly is a fantastic vehicle to look at adverse events because they're simple to detect. It may not be the best vehicle to look at comparative effectiveness, though, because it's potentially confounded. You can add propensity adjustment. You can do a lot of latent variable analysis and so on, and, you know, if you have high ascertainment and a decent quality system, you probably could do some comparative-type work. But there's a hierarchy of what a registry can do well. And then as you move on down, it becomes much more of an issue where randomization actually takes over.

So that would be something I think the group could, you know, inform us on is, you know, how do you feel about those things? Because the low-hanging fruit is product performance and adverse events, which can be taken care of I think by a low-cost registry.
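As an illustration of the kind of propensity adjustment mentioned here for comparative assessment from registry data, a minimal sketch might look like the following. The column names (age, sex, bmi, device, revision_within_5y) are hypothetical and assumed to be numerically encoded; none of them come from any real registry, and this is a sketch of inverse-probability weighting generally, not of any specific analysis discussed at the workshop.

```python
# Hypothetical sketch of inverse-probability-of-treatment weighting (IPTW)
# on a registry extract; column names are illustrative, not from any real registry.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iptw_risk_difference(registry: pd.DataFrame) -> float:
    """Estimate the weighted difference in revision risk between two devices."""
    covariates = registry[["age", "sex", "bmi"]]        # confounders recorded at implant (numeric)
    treated = registry["device"].values                  # 1 = device A, 0 = device B
    outcome = registry["revision_within_5y"].values      # 1 = revised, 0 = not revised

    # Propensity score: probability of receiving device A given the covariates
    ps = LogisticRegression(max_iter=1000).fit(covariates, treated).predict_proba(covariates)[:, 1]

    # Stabilized inverse-probability weights
    p_treated = treated.mean()
    weights = np.where(treated == 1, p_treated / ps, (1 - p_treated) / (1 - ps))

    # Weighted risk in each arm, then the difference
    risk_a = np.average(outcome[treated == 1], weights=weights[treated == 1])
    risk_b = np.average(outcome[treated == 0], weights=weights[treated == 0])
    return risk_a - risk_b
```

As the panel notes, adjustment of this kind only addresses measured confounders; it does not replace randomization further down the hierarchy.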

DR. HENEGHAN: Can I come in just on that one point?

DR. REDBERG: I just want to -- can we just -- you want to say something, and then we'll go to Art and the next -- thanks.

DR. SUMMERSKILL: This session is on opportunities and challenges, and I think it's been terrific listening to the various challenges and opportunities that are out there. And as I've listened, I've also realized that I'm listening not just as an editor; all of us are listening as potential consumers and users of these devices. And with that in mind, what we have heard has been variable thresholds of evidence, outsourcing of compliance, rarity of withdrawal, difficulty in detecting adverse signals that come from these devices. And this has nothing to do with the very helpful and responsible comments around this table from colleagues from industry. But I wonder if it is too tenuous to draw a comparison to the regulation of the financial markets, where there was very close communication in setting up how services were going to be regulated -- perhaps in retrospect too close. And perhaps there weren't the clear boundaries and thresholds in place, and we now have a mess.

Now, the challenge is to avoid creating this kind of crisis in devices in the future. The opportunity is that there are two days to try to figure out what criteria we need to have to minimize that risk so that none of us has to face what in England we call the Jeremy Paxman question -- he is a very incisive and aggressive TV interviewer. It's that one question you just do not want to be asked on national television, such as: you were at the FDA when device regulation was discussed; what were your thoughts about this?

So I think it's trying to draw together all these discussions now and think, well, actually, we're picking up a few worrying messages coming through, and that's terrific. That's what this is about: recognizing these potential weaknesses now. My son is a trainee pilot. In aviation there is the Swiss cheese theory behind accidents. It's when all the holes in the cheese begin to line up. And we've heard about different holes, so we now need to think about how we can patch as many of those as possible.

DR. REDBERG: Thank you.

DR. MARINAC-DABIC: There was somebody else before --

DR. REDBERG: Art and then Marina.

DR. SEDRAKYAN: It's hard to communicate after Bill's elegant remarks. But the issue that I thought about, going back to your example, Rita, about the ICD registry, and then thinking about Rick's example, is what the purpose of a registry should be -- we have to be clear about the properly designed registry issue here.

The ICD registry was an interesting example. It was meant to measure whether you can look at the efficacy of ICDs in some populations compared to not using an ICD. I think that was the original intent. And yet we ended up with a registry of a procedure that never addressed the original question, which was are there any groups of people who wouldn't benefit from an ICD. And then for comparative effectiveness purposes, again comparing different devices, it depends on the particular device and the particular use pattern; a registry certainly can be a good tool for comparative assessment as well, particularly, for example, in orthopedics and hip and knee replacement. It's a great tool for looking at the comparative performance of a variety of products. So it depends on the pattern of use and on clinicians' variation in the choice of a particular product.

DR. LILFORD: Thank you. Richard Lilford.

One of the main ideas as a protection against the risks that Bill identifies is to have registries, and this morning I think it was Dr. Maisel who showed us a picture of a barcode whereby all the devices would be recorded. I'm just wondering what we mean by registry. Do we mean that all 120,000 devices are going to have their own registry, or a registry for their group? Is that feasible or possible? Who's going to pay for it, analyze it, and keep it up to date? Because these are expensive; the knee registry will cost a lot of money. Or do we mean we're going to have a proper system of recording the electronic notes as we go forward so that we'll be able to ask questions about all the devices that have been used in our health services?

DR. GRAVES: And if I could just jump on there. I must admit I always cringe a little bit when people say a registry is expensive and that -- because just to put it in context, in Australia a hip replacement is $20,000 -- don't -- no --

DR. LILFORD: Hips are --

DR. GRAVES: Let me finish, please.

DR. LILFORD: -- registry. I'm talking all devices --

DR. GRAVES: No, let me finish.

The procedure is $20,000. To monitor that procedure for the rest of the life of that patient is $20 -- not expensive.

UNIDENTIFIED SPEAKER: (Off microphone.)

DR. GRAVES: In -- well, in -- exactly. And I think that that's part of the point of where the discussion is going, and I think that that's really what Art was alluding to: registries can be very, very useful in certain circumstances. In other circumstances they're actually not very useful, because there is not a clear, hard endpoint to look at, or because there is significant variation in the patient population, so you have significant confounders. And you see, the beauty of the joint registries is that the patients are all fairly similar and there's a very clear endpoint. So that's why they work so well. But because they work well doesn't mean the other registries will work well, and so there has to be careful consideration when you're establishing a registry as to whether it will work. What is the problem you're going to solve, as has been alluded to? And that's very important to understand.

DR. REDBERG: So can you --

DR. GRAVES: But I must admit, I think if you look at the cost savings from registries that work very well, our registry over 11 years or so has cost about $11 million to run. We've already calculated that we've saved around $400 million for the Australian community through that.

DR. REDBERG: Can you comment because some of the barriers we've heard to registries in other countries besides Australia are that patients don't want to participate, doctors don't want to participate. How did you overcome those barriers in Australia?

DR. GRAVES: The patients not wanting to participate is just simply not true. Patients are actually very keen to participate. The Australian registry is voluntary. We have information now on 800,000 people, and everyone's given the opportunity to opt out. Twenty-five patients have opted out of 800,000. So it's simply not true.

As for doctors or hospitals participating, what you do is set your registry up so that there is a benefit. So what happens with our registry, as with many other registries, is the doctors know what their outcomes are compared to national averages. It's part of their auditing. It's part of their accreditation process. For hospitals, it provides a service to them. For instance, if there's a recall and they want to know which patients they have to recall, if they don't have an electronic record of all the devices, which very few hospitals do, then they'll have to go through all the case notes. The other alternative is to contact the registry and get a list of the particular patients that have that device in 15 or 20 minutes. And so it helps them to meet their legal requirements.
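To make that recall scenario concrete, the registry lookup amounts to a simple filter over the implant records. This is only a sketch: the file layout and column names (patient_id, hospital, catalogue_number, lot_number) are hypothetical and are not taken from the Australian registry or any other real system.

```python
# Hypothetical sketch of a recall lookup against a registry extract.
# Column names and file layout are illustrative only.
import pandas as pd

def patients_affected_by_recall(registry_csv: str, recalled_lots: set) -> pd.DataFrame:
    """Return the patients and hospitals holding implants from recalled lots."""
    registry = pd.read_csv(registry_csv)
    affected = registry[registry["lot_number"].isin(recalled_lots)]
    # Each hospital gets its own list so it can contact its patients directly.
    return affected[["hospital", "patient_id", "catalogue_number", "lot_number"]] \
        .sort_values(["hospital", "patient_id"])

# Example with lots named in a hypothetical recall notice:
# print(patients_affected_by_recall("registry_extract.csv", {"LOT-1234", "LOT-5678"}))
```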

But also, if a hospital is being accredited -- one of the things that happened in Australia was that, as part of the accreditation process, hospitals were asked how they followed the long-term outcomes of their joint replacement procedures. And the only way that they could answer was to contribute information to the National Joint Replacement Registry.

What you need to do is think about the benefits, but it also has to be integrated within the processes within the government and with the regulations. So what we do is identify how the registry ties in with what the regulatory body does through its processes. But it's also very much entwined with pricing and determining the pricing of products. And, in fact, in Australia there is a system where the better devices get what's called a superior clinical performance listing, and they get more money. And so that's a positive to try and keep those devices on the market.

So it's really about integrating the system. And I think that people talk about barriers a lot, but often they talk about them without very much information. And I think if you look at it in a very positive way, there are in fact very few barriers.

And one of the things that is very important, because you've mentioned cost and you saw me get riled up about that, is the costing. In Australia, as with the U.K. registry, the government pays the cost of the registry. But what it does is recover that cost by billing the companies, and the companies are actually quite happy to pay it because they get really good service in knowing what the outcomes are for the joint replacements being used.

DR. REDBERG: Thank you. That's very helpful, and certainly we do spend billions on devices and not so much on registries here.

Yes.

DR. VEGA: I would be remiss if I didn't speak -- if I'm schlepping from the outer boroughs of New York and not to your point, which thank God you said something. Okay. Patience. Education. Patience. Involvement. How do you take a person who's been diagnosed, okay, and turn them not into a victim, not into a client, not into a patient, but into an advocate? Ladies and gentlemen, it's very, very, very easy. Okay. And you empower them. How? Not with Wheaties. You empower them by telling them that if they want to live and help and see their children and grandchildren develop as -- okay, they have to get involved in their survivorship. Okay. And so what I say to them is you must believe in one's own survival in order to survive. You have to get off your backside and do something to survive, and the way that you do that is very simple. You learn that if you help yourself, you're helping the family. Moreover, if you get involved in other people's survival, which is what I call my -- you help others to help themselves, and in that process you are holistically healing yourself.

I am of the age, 69 plus, where the doctor, God bless him, came to your home. How did I know the doctor was coming? Very simple. My grandmother started cleaning. The food came out. The place was clean, and God forbid you had to regurgitate. You had to do it in the bathroom because you couldn't do it there because the most important person in your life, someone that you looked up to, someone who was going to be a mentor -- in my case, I was fortunate, it was a woman -- a lady doctor, a pretty lady doctor. They came and listened. They helped you to get into the environment of healing yourself, really, really -- and I sincerely believe that so, so often we talk about the patient as a nonentity and not -- we can go in the elevator and hear someone discussing bed three as if the person had already atrophied. Okay. They were dead. I'm telling you that we have a tremendous resource within our patient body, within the body of the patients because they're not just patients. They're people.

And as you so clearly said, here I am, this lady from the TV, blah blah blah blah, who ended up with cancer three times. So I became a very big consumer, and I learned very quickly that if I was going to take a backseat, okay, I wouldn't survive and they -- my mother, grandmother, sister, all the people in my family, by the time I was 15 I was orphaned because everyone had died of breast cancer.

I got a scholarship to medical school, had to leave there when I was diagnosed myself. So I went back. When they close the front door, the window in the back is always open, and you break in.

As the woman was asking this morning, lady, life sucks, and then you walk both sides of the street, and that's really the truth. But we have to give our patients the right to understand that they have the need to involve themselves in the process. I've sat here -- involved is really, really different, because otherwise my blood pressure would be up. But I've sat here in other conferences -- here where people talk about the need not to disclose to the patients because, in fact, they're not discerning and it causes anxiety. Right. They don't have anxiety otherwise, just when they read the statistics, right? They're not sitting up at 3 o'clock saying am I going to die? What am I going to -- no, no, no. Just when they read statistics.

Come on. Get over it. Let's be really, really honest. We have the power in this room with the registries and with the patient information that we can share with developing patients -- okay, not everybody wants to be a patient advocate. Not everybody wants to be involved. But if you talk about their longevity or their family longevity, and you involve cultural diversity and sensitivity, and you understand the culture, guess what? You got a team player. And I'm telling you that's really important.

So -- invite me back, but I'm telling you that it's a very important piece that we can bring up here, and we will really work on it. Thank you.

DR. REDBERG: Thank you. That was great.

We have a few more minutes left, so we could have maybe two more comments, and I think there were a few around the room.

Jeffrey.

DR. BARKUN: It's difficult to make a comment after that wonderful comment. I think you've heard that the companies don't really mind registries that much. The people who have the registries like the registries. The patients like the registries. And when I speak to my friends who do surveillance of medications, their problem is the patient takes medication A one day, B another day, C another day, so when they look at a long-term outcome, they can't assess exposure. So statistically it's very difficult. We don't have that problem. The same thing that makes it behoove us to follow the patients long term actually makes it easier to assess a possible causal relationship. So there's no statistical argument not to do it.

So I think the answer from the point of view of registries seems to be fairly obvious. There are two issues I want to bring up. One is what type of registry we want, and I would put in a plea for not just a device-type registry but either an operation-based or, ideally, a disease-based registry. And the idea there is that you can't necessarily think that the device is the key. The key is everything that's going on -- basically the patient and what else is going on. And if ever you want to do some type of comparative study in a very, very late phase, if all you have is the time zero of that device and you don't have what's been going on at the same time, you're stuck. So I think that we have to think a little bit out of the box when we accept the registry type, to have it, you know, either operation or disease based.

The last comment I have is on what I thought we were discussing, which is conditional approval, rather than the later phases. But it sounds to me that if the FDA is there and gives approval and can give conditional approval, and Medicare -- sorry, not Medicaid -- decides whether it's paid for or not, and probably most of the insurance companies eventually follow what Medicare does, I really don't see what the problem is. The problem is one of will, to try and say wait a second, you know, this is not good enough, and we have to go out and get it. That's the only place where there is a problem.

Now, it's easy for me to say that because our system is a bit different perhaps in Canada and so on. But I think from this point of view, the answer is not methodology, and we just have to go ahead and do it.

DR. REDBERG: That's great. And actually, I think your first point addresses what Art brought up earlier: if you have an ICD-based registry, you don't know what happened to the other patients. But if you had a congestive heart failure-based registry, you'd know what happened to the patients who got an ICD and the patients who didn't.

I'm sorry. Peter, you want the last comment?

DR. McCULLOCH: Yeah. I basically want to endorse what Jeff's saying. I'm delighted to hear from somebody with so much experience of registries, of their efficacy and cheapness. This is something I've been campaigning about with cancer registries in Britain, unsuccessfully, with our cancer czar for years. But as Jeff pointed out, we have unwittingly diverged from the subject a bit, in that the topic was coverage with evidence. Now, I don't think that's a huge problem, but if there is any possibility over the next few minutes of looking at this in terms of what else, apart from registries, could usefully be put in place in the FDA context to support coverage with evidence development, I'd be interested in hearing that.

And just one final point. I mean I think this brings back your initial comment, Rita, that actually getting compliance with registries is a matter of sticks and carrots. It's a matter of making it worthwhile to all the people who have to cooperate for it to happen. And so that's a relatively straightforward piece of social engineering, I guess you might call it, politics -- outside of my realm completely. But it's not a scientific or methodological problem.

DR. MARINAC-DABIC: Another issue in terms of registries is that we do have some really great examples. I'm going to talk about the registries in the next session. But even for the ones that really work, there's a problem with access to the registry. For example, it so happens that we do have full access to the Australian registry, and we don't have access to many of the U.S. registries, which again speaks to the silos and the fact that even though we criticized the silos in the morning presentations, we still speak just wearing our own companies' or agencies' hats. And I think until we agree that patient safety is really a shared responsibility, regardless of the differences in the statutes that apply to CMS or the FDA or the European agencies or the U.K., I think this meeting and the issue that we discussed today is really about the totality of the evidence appraisal. And I think from my perspective, what we need is really solid infrastructure, solid methodology, and building the partnership that can actually take advantage of both the methods and the infrastructure.

DR. SEDRAKYAN: I'll ask a question. I can pose a question. So if registry methodology were to be acceptable and well understood by regulatory bodies, could this help the condition-of-approval mechanism to be used more often? Is this something that would be more appealing to the regulatory agency if the registry concept were to be embraced well by industry and by the clinical community? You know, going back to conditional approval and coupling it with the evidence development issue. There's such a problem with CMS, because I know CMS for a long time didn't really favor observational research.

DR. SCHAFER: Favor it or didn't favor it?

DR. SEDRAKYAN: (Off microphone.) Favor.

DR. SCHAFER: Art, it really -- again, we get back to the fact that we're in a rapidly evolving healthcare environment. Okay. Right now, coverage doesn't consider cost. We do not consider cost for coverage in the United States. Is that going to be true a year, two years, three years from now? I'll let you think about it. So I mean ideas are evolving, aren't they? And it depends on the device, the outcomes, again, the patient population. I can't give you a yes or no.

DR. MARINAC-DABIC: If I can just make one quick comment. So we heard the example of this early feasibility guidance document, and we also heard Phil's comments that this is within our regulatory authority, and this is why we are issuing the guidance. But how many other such opportunities have we as the FDA missed and not utilized, and why -- and other agencies as well? Because nothing changed, but we are now at the point where we can actually write this guidance, even though nothing has changed that either forced us or allowed us to do that. So I think it would be useful if we spend some time during the rest of the day and tomorrow figuring out where those other potential areas are where, if we choose to be a little bit more creative in interpreting what the regulation is, we might actually be able to jointly move the field forward, not only from the FDA side but also from, you know, CMS' side and the Europeans and Japan and the other countries that are here.

DR. BRUCE CAMPBELL: You just saw me despairing. I've just been turning to Sue and saying we've been saying this for years and years and years. What's fascinating to me, having been banging our heads against brick walls in the U.K. for registers for just this purpose for seven or eight years, is to suddenly, as you've just said, notice this certain change in mood, and I'm just sitting here asking what's happened, because five or six years ago we were not hearing any great enthusiasm for this. So I'm just fascinated, as you are, as to what's changed.

DR. MARINAC-DABIC: Well, I think maybe a good --

DR. RITCHEY: So I'm noticing that we're overdue for a break, and I know that Danica's next talk is really exciting about talking about innovation and innovative ways to do this. So maybe we can take a ten-minute pause and come back and start with that.

(Off the record.)

(On the record.)

DR. GROSS: Ready to start, the next session that is. Okay. The next session is entitled "Innovative Approaches for Postmarket Evaluation: At the Cutting Edge." We have two speakers for this session. It will be followed by structured small group discussion. Our first speaker is Danica, who will be talking about contemporary postmarket approaches for devices, the Medical Device Epidemiology Network.

DR. MARINAC-DABIC: Good afternoon. I do have quite a number of slides to go over, but I'll try to just hit the highlights because I would like us to have enough time to break into three sessions and try to build on the great discussion that we have had so far. So here's a summary of what I was planning to talk about: just to give you a little bit of the public health context and the need for a more modern type of postmarket surveillance. Then I will talk about one of the recent initiatives that we started here at CDRH called the Medical Device Epidemiology Network, or MDEpiNet for short, with a focus on innovative methodologies for studying medical devices, a focus on building infrastructure for the postmarket, and also with a much larger focus on cooperative work between agencies and stakeholders, and again, getting out of the FDA silo if you wish. Also, I'm going to talk a little bit about the added value to regulatory science and our vision for the future, which I hope will set the stage well for the small group discussions.

So we touched a little bit upon the postmarket context in the device arena, and I just would like to go one more time over what types of studies we actually are looking at, who is doing what studies, and how the responsibility is shared between us and industry. On the left side you have FDA-mandated postmarket studies, and those fall into two categories. One kind is ordered at the time of the approval, and we call them post-approval studies or conditional approval studies. The others can be ordered any time after the approval if certain conditions are met and certain safety issues have been raised, and we want a company to address them through a formal study. We currently have over 150 ongoing post-approval studies and approximately the same number of ongoing Section 522, or postmarket surveillance, studies.

In addition to that, FDA is sponsoring quite a number of original research studies that are done in instances where the question is not related to a specific PMA or a specific device but is more of an overarching issue, or where there's a need for development of better methodology to advance the postmarket questions. So then we would do this type of research, we would synthesize the evidence, or we would try to explore different ways of how we can inform CDRH regulatory and public health decision making.

But overarching all of this -- and you probably are familiar with the Sentinel Initiative and what its goals are -- is the evolving infrastructure that is being built for sustainable surveillance of medical devices and other products; and, of course, the newest initiative that we call MDEpiNet.

We did recognize the need to strengthen and organize the evidence appraisal in the postmarket device context. We also recognized that there is a need for systematic and active collection of data to create a robust body of evidence. What I mean by that is there are good examples of good efforts by industry, by the FDA, and by other stakeholders, but many of those efforts are initiated based on a specific postmarket question -- for example, registries initiated and closed upon completion of the FDA mandate, or FDA initiates a study and closes it after the question is addressed -- so that data and infrastructure don't remain in place and don't serve to augment the totality of the evidence that FDA can potentially use in the future.

So as epidemiologists here at CDRH, we are very much involved in studying utilization and diffusion of medical devices into clinical practice. We study patterns of use in different populations. We also are designing studies that will address long-term safety and effectiveness for medical devices and that would better refine the benefit/risk profile in a real-world setting, focusing on issues that still remain after the decision is reached that a particular device meets the reasonable assurance of safety and effectiveness at the time of the approval -- meaning some of the learning curve effects need to be studied, heterogeneity of treatment effects, and comparative safety and effectiveness in the medical device world -- and we're also looking into the public health impact or burden of medical devices.

But there are lots of challenges in the way of really being a good epidemiologist in the medical device regulatory setting. Technology and innovation evolve rapidly. We heard that over and over again. And that really poses the question of how to design good studies, how to capture evidence, and how to get ready to capture this new, evolving technology.

We also talked about the silos of information, and here I was focusing more on the FDA regulatory pathway, but this example is applicable to the other silos that we talked about -- regulatory versus, you know, in this particular case, U.S. CMS or the Agency for Healthcare Research and Quality or NIH. All these agencies have their own mandates, and I think we do not communicate and share the information enough for the benefit of the patients. And this path actually should be a continuum, not silos.

We have lots of data sources for medical devices, and I'm not going to go into details on the benefits or limitations of each. The point that I'm trying to make is that not enough methods have been developed or utilized to actually integrate the data that reside in these different data sources, so that any time CDRH makes a regulatory decision in a postmarket setting, we can say that we actually looked at all the available evidence that is relevant. We certainly utilize different study methodologies, and one can think of the potential benefits of really combining the evidence that resides in those different studies to actually augment the tools that we currently have in place.

So our vision is to use all the evidence for the decision, to take advantage of the methodological advances in evidence-based medicine, health services research, and certainly epidemiology and statistics as the backbones of public health practice. We also recognize the differences between devices and drugs, and many speakers spoke to those differences today. We also need to recognize the international context of device use and the challenges and opportunities specific to devices.

So this is why we thought of creating a consortium that will help us think together and join our knowledge and expertise with our colleagues from all stakeholders, primarily with academia, and then have input from all the other very important stakeholders, including funders, payers, industry, patients, and clinicians, that would help us prioritize the research, give us an opportunity to take their points of view into account when we design the studies, and really put together an integrated framework for the postmarket. And this is very timely, as the IOM is now really advising us to reinvent the way we look into postmarket devices. I think this is a timely initiative that can add a lot to the new way the FDA is going to do our business.

So basically what the MDEpiNet Initiative means is the creation and support of MDEpiNet, an FDA/academia epidemiology consortium, to advance innovative methodologies for studying medical devices. Even though we call it epidemiology, I would like to make it clear that we recognize that this is a very interdisciplinary approach. We're looking for clinical, statistical, and other subject matter expertise as we try to design good studies that can actually be executed.

So at this time, we have the following academic sites that have initiated the process to formalize the relationship through the formal consortium. And as you can see, those are very prominent universities in the United States that all have a documented and certainly great record of wonderful methodological innovations in the area of medical devices. And we also started discussions with the University of Oxford, as we're hoping that we're going to be able to bring them in as the first international university to join the network.

In addition to universities, we also have established relationship with many professional societies. I have just listed some that work with us very closely through contracting mechanism or different types of cooperative work: American College of Cardiology, Society of Thoracic Surgeons, American College of Chest Physicians, American College of Surgeons, American Society of Plastic Surgeons, and others.

In addition to that, we collaborate with other government agencies more and more, as we realize that we cannot do our job properly if we do not put the regulatory science and safety and benefit of medical devices that we approve in the context of the postmarket safety for medical devices.

The mission of the MDEpiNet initiative is to develop the infrastructure and methodological approaches for conducting robust studies to improve the understanding of medical device safety and effectiveness throughout the device life cycle. So two highlights here that I think are important. This focuses on medical devices. We know of many, many initiatives recently launched that have a pharmaceutical angle as a focus. This one is focusing solely on medical devices, because we believe that not enough has been done in the past to boost the building of methods and infrastructure for medical devices. And with medical devices, again, procedures go hand in hand. We cannot assess the performance of medical devices if we are not looking into the entire context.

Another highlight here that is important to mention is that we talk about the total product life cycle, meaning that even though we're looking in the postmarket, the ultimate goal is really to feed this information back to the premarket setting so that industry and investigators will have the best information available as they design the new studies for new generations of medical devices.

The objectives of MDEpiNet are to improve the paradigm of how medical device knowledge is utilized throughout the device life cycle. And again, I'm using the word "paradigm," which all of us, as Peter pointed out, maybe interpret in different ways. But from our perspective, it's important that you recognize this is a unique opportunity to actually change the way we make decisions in the regulatory setting. As opposed to traditionally, you know, having these trials and making decisions upon completion of each part of the review cycle, we're advocating to actually look at the totality of the evidence, as we need to do in the postmarket setting to make a certain decision for a medical device.

We also would like to leverage partner resources and expertise and create a sustainable partnership and infrastructure through which all stakeholders will continue to gain valuable knowledge about medical devices. And by this I mean that we have already created an advisory group that helps us put together the business plan for the public-private partnership. Ultimately this is going to become a partnership of all entities that have an interest in improving the safety and effectiveness of medical devices. So on this committee we already have one member from each university. Industry also sits on this committee, and we are trying to make sure we soon have a patient representative on the committee as well. So the goal is to define this infrastructure with all these important stakeholders' positions in mind and not to build something that is just going to be suitable for making regulatory decisions.

And we would like to be fully integrated in the systematic evaluation of medical devices and the CDRH decision making. By this I mean we certainly have a history of doing good research and publishing papers and all that, but in the past we have not been enough, I would say, a part of the formal decision making, and this is going to change. So some of the approaches that we would like to take under this initiative are to systematically evaluate evidence of risks and benefits, not ad hoc -- not to wait for a crisis to happen and then go look back at the evidence and figure out, you know, what was done wrong and what we would have done had we known what we know now -- but really to have a system in place that will systematically evaluate the new information as the device moves through the phases of the TPLC.

We want to cooperate with external parties, and as I mentioned, you know, all of the different stakeholders bring different talents. We would like to develop and test innovative methodological approaches for medical device research and regulatory science. And I know a lot of new, innovative methods have already been discussed today, but there are still a lot of things we can work on, because many of the methods that are used were developed for pharmacoepidemiology, and there are some unique challenges for medical devices, especially in a world where we don't have unique device identification.

Dissemination of findings to all stakeholders is also important so that we can be very transparent. Whether findings are good or bad, I think it's important to share them with the public, because only timely sharing of this information can actually help us move forward.

What we envision under this paradigm is that the epidemiology programs at FDA and the epidemiology sites and academic partners, certainly with input from all stakeholders, will work very interactively. We almost envision this as an external part of our research program, and we have confidential disclosure agreements, for example, already signed with universities, so they can take a look at, you know, sensitive information and help us actually move the methods forward and come up with the best strategy to address a particular question.

And through the systematic appraisal of the evidence, we can identify new studies that are needed or what the next steps are going to be. A good example was the paper published in BMJ a couple of days ago, when we worked with two of our sites, Cornell and Harvard, to compare the effectiveness of hip implants with varying surfaces, and again, I'm happy to have many of the coauthors actually present here today. Again, Steve Graves, who works with us as well, is always a great supporter of this innovative approach. So, again, that's one example of how this will work.

Focus on infrastructure. I have chosen just two examples, and since we talked about registries, I think it is time to go very quickly through some of the registries that we already have in place and where we work closely with other entities. CDRH uses registries for post-approval studies and surveillance, and some examples are listed here. Bullet number one gives good examples of existing registries that were not created for meeting a mandated post-approval study, but where we were able to nest the post-approval study in the existing registry. So INTERMACS is a good example; we have a couple of post-approval studies nested in this registry. The Kaiser Total Joint Replacement Registry and the Australian Registry also serve as venues for post-approval studies.

In addition to that, we facilitate new registry development. We have put some seed money into the development of the Atrial Fibrillation Registry and the development of its data collection tool. We have certainly worked with our orthopedic colleagues to facilitate the development of the National Orthopedic Registry and so on. I'm not going to go into a lot of detail on those, just to illustrate that the FDA is very active in advocating for registry development.

We use existing registries for discretionary studies, and I have listed some here; based on some of our studies, some products have actually been withdrawn from the market based on utilization of the registries to conduct the study. As you can see, there are some international colleagues and international registries represented here that we always like to showcase as great partners in this type of effort. We're constantly exploring registry capabilities, trying to figure out whether there is any active surveillance software that we can actually utilize in looking for signals in those registries. We are actively working with AHRQ on their guide on registries as coauthors of some of the chapters -- although I might say we are late on some submissions -- but we are part of the team.

And then finally, we build methodological infrastructure for registries, and a great example of that is the International Consortium of Orthopedic Registries. This is the group picture from our recent meeting here in May, when we invited 29 registries from all over the world, from 14 nations. These registries all contain information on orthopedic implants, and they represent more than 3.5 million patients. They worked very hard during the last year to come together, to come here to the FDA, and to make a strong commitment and send the message that they're ready to work with us on development of the scientific infrastructure for addressing postmarket concerns in the orthopedic world. We have just recently awarded the contract to Cornell to lead the FDA Science and Infrastructure Center that will be in charge of coordinating this International Consortium of Orthopedic Registries. Hopefully this is going to be a very innovative way for us to follow and/or amplify signals or look for solutions to address some of the important postmarket questions.

And then I'm quickly going to go through maybe two or three more slides about some of the methodologies that we are trying to use that we haven't used routinely before. One example is the evidence synthesis initiative that we also launched last year. This is the paper that Art already presented, so I'm going to skip over that. But one of the innovative efforts that Professor Normand from Harvard, again another MDEpiNet site, is leading is to make better use of existing pre- and post-approval data and to simultaneously apply some of the known methodologies to integrate the data from both the premarket and the postmarket settings. That helps us actually use data that are not so perfect in the medical device world rather than dismiss them -- even the data from weaker studies we would like to be able to capture through this modeling.

So we published a preliminary analysis of this last year, and we're about to submit for publication the full analysis, utilizing data from premarket clinical trials, post-approval studies, the Australian registry, and CMS billing data on hips, trying to put these data together into models that will give us more information on certain subgroups that we didn't have at the time of the approval.
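To make the idea of combining premarket and postmarket data a little more concrete, the sketch below shows one common way to pool effect estimates from heterogeneous sources (a premarket trial, a post-approval study, a registry, and claims data) using DerSimonian-Laird random-effects weighting. It is only an illustration with made-up numbers; it is not the analysis the speaker describes, which may use different and more sophisticated methods.

```python
# A minimal sketch (hypothetical data) of pooling effect estimates across
# heterogeneous evidence sources with DerSimonian-Laird random effects.
import numpy as np

def random_effects_pool(estimates, std_errors):
    """Pool effect estimates (e.g., log hazard ratios) across sources."""
    y = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)               # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-source variance
    w_star = 1.0 / (se**2 + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    pooled_se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled_se, tau2

# Hypothetical log hazard ratios (device vs. comparator) from four sources:
# premarket trial, post-approval study, registry, claims linkage.
log_hr = [0.10, 0.18, 0.25, 0.22]
se = [0.20, 0.15, 0.06, 0.09]
pooled, pooled_se, tau2 = random_effects_pool(log_hr, se)
print(f"pooled HR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*pooled_se):.2f}-"
      f"{np.exp(pooled + 1.96*pooled_se):.2f}), tau^2 = {tau2:.3f}")
```

A hierarchical Bayesian model would be a natural extension when sources differ in design quality, but even this simple pooled estimate shows how registry and claims data can sharpen information on subgroups beyond what the premarket trial alone provides.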

And we talked about MDEpiNet as a partnership, as a third focus of these comprehensive efforts to revamp the way we do postmarket studies at CDRH. The unique role of MDEpiNet, we expect, will be to provide tools such as various study designs for distributed, network-based research and collaborative work, and to advance analytical methods such as multilevel analysis -- which looks at hospital, surgeon, and patient factors -- and many other innovative methods that have not been systematically applied in the past.

We also thought about how this can actually help industry, because we would like to be useful and to learn how MDEpiNet can develop tools that can later be helpful for new sponsors bringing new devices for approval. So we envision, and this is just one example, that for a company thinking of submitting a new PMA for an orthopedic device, we would be able to look at all available data sources for that clinical device area and work with the sponsor to give them an inventory of the other data sources that might be at their disposal -- to learn more about how to design the study, what needs to be done, how they can supplement their premarket application with these data, and then how they can later use that infrastructure for meeting the FDA postmarket requirements. We will evaluate the quality of these data sources through MDEpiNet, making sure that the inventory is readily available to industry. Maybe orthopedics is not the best example, because there are so many great registries, but there are certainly device areas where many small companies do not have a good understanding of what is at their disposal to meet the postmarket requirements.

We believe the benefits will include improved knowledge of the extent of observational clinical data that are available for analysis. We all know the quality of randomized controlled trials, but we also know their limitations in the real world. So having better methodologies for how one can use observational data for postmarket purposes is very important, and we know that not every company has an epidemiologist or a statistician on board. We would like to work more interactively with industry to guide them, so that at the time we receive a submission, that submission is in really good shape, it can go through the process, and at the time of approval we will have a post-approval study that's ready to go.

We think this type of collaborative work will reduce the burden on the sponsors and also the burden on the FDA review staff. Once you get a submission that is really good, when there is clear direction, and when we are all knowledgeable about the potential issues related to that particular submission, I think we have a high chance to actually design a good study, execute the study, or utilize a registry for nesting the study.

Once MDEpiNet is fully operational as a public-private partnership, it is going to be the venue for all stakeholders to have their input on a continuous basis.

Vision for the future: we would like to make MDEpiNet a CDRH resource. We would like to make sure these efforts are interdisciplinary in nature, including clinicians, engineers, human factors specialists, epidemiologists, statisticians, and other profiles, certainly augmented from external sources. We would like to integrate this with other CDRH programs, so that when we talk about spontaneous reporting, or about other types of studies and mandates we have, there is clear integration of how these data are going to augment or leverage those other efforts. We believe this will lead to better premarket studies above everything else, because ultimately, if that information is fed back into the new cycle and the new products, that is going to be helpful for us, for industry, and for other stakeholders, and will advance regulatory science.

So basically these are the highlights. What is different? It may sound so logical that maybe you're wondering why I'm making such a big deal out of something that should be a really natural extension of connecting research and regulatory science. But what's critical here is that we are proposing systematic identification of all relevant data, not ad hoc. We're proposing innovative research infrastructure and not traditional, old-fashioned post-approval studies. We're talking about linking registries with administrative billing data and such.

Also, what's different here is that this is a focused effort centered around device content areas. This is not speaking in general terms. We have teams of experts here at CDRH working with teams of experts in academia, together with relevant people from industry, patients, and other stakeholders. It's expertise driven, meaning that we have really great experts on all sides. And, more importantly, it is envisioned as a dynamic integration and synthesis: anytime new information becomes available, we would like to be able to update it and share the information. And we would like to think that this is going to be critical for CDRH decision making and not just some research exercise that only gets published without being implemented in policy and decision making.

And this is just to advertise some future conferences we have planned for this year for those of you who are interested. The dates might not be completely final, but this is the list: a 522 studies conference in February, the MDEpiNet annual conference in April, a post-approval studies conference in May, and a registries conference in June. All of these are public, there will be a Federal Register notice for each, and all of them will result in some kind of white paper or published report.

So thank you.

(Applause.)

DR. GROSS: Okay. Thank you, Danica. Our next speaker is Jonathan Cook. He's a methodologist at the Health Services Research Unit, University of Aberdeen in the U.K. And he'll be speaking to us about IDEAL recommendations for Stage 4.

DR. COOK: Thank you. As the last speaker today, and also someone who's just getting over jetlag, I think it's very much my duty to speak under the advertised time. I will take on that task with some glee, and I'm hopeful that I will achieve it with a few minutes to spare.

This talk is very much a companion to Peter's this morning. It's looking at the later stages of IDEAL, and those who are still sharply focused might notice the title has undergone a little bit of a change. I've actually taken on board some of the discussion this morning and revised it slightly. So if I do put you to sleep or bore you intensely, at least one opportunity for those who have a packet is to play a game of spot the difference: you can go through each slide and see if you can find any differences.

But I wanted to start by recapping a lot of what Peter said this morning about IDEAL, and I just wanted to emphasize -- and this possibly reflects part of the tension between when we talk about the device setting and when we talk about IDEAL -- this ethos of continual evaluation: that we are thinking of surgical innovation and the fact that it's not a one-off evaluation, but a process that goes on as we learn more and more. And arguably, from a procedure perspective, it never stops. You might also argue that's true to a certain extent of medical devices, or at least it should be.

Another thing to say about IDEAL is that it's focused on surgical innovation. This has been picked up before: it encompasses medical devices, but obviously it's not the same thing as starting out from the perspective of medical devices and looking the other way, which is a natural perspective for some people to take. From industry, as the designer and the manufacturer of a device, that's the perspective you naturally take. From a regulatory body that has a responsibility to regulate medical devices, that's the perspective you take. From a wider health perspective, clearly it's how devices are used in practice that we're interested in, and that's how patients approach it. If I can put it crudely, a device is just a thing that's used to do what needs to be done -- that is very much the way I would look at medical devices, and I think patients typically do. Sometimes those two perspectives coalesce quite nicely, but other times there is a clear distinction.

So I'm going to talk a little bit about Stage 3 and Stage 4 in the IDEAL setup; Peter covered a lot of the recommendations from the earlier stages this morning. I've had to include Stage 3 here as well as Stage 4, because in terms of what you might call postmarket surveillance or post-approval evaluation, it encompasses what is under Stage 3 and also what is under Stage 4, which is long-term study or surveillance, depending on what terminology you want to use. I also want to take the opportunity to think a little bit more about the device, the procedure, and their interaction -- without really giving a resolution, just laying out the choices to be made. Obviously there's a link between device and procedure; it's natural, and they cannot be treated in isolation from each other. We can focus more on one than the other, so we can make the device the main focus of a particular study, or we can make the associated surgical procedure the main focus. But we certainly cannot ignore the fact that when a device is used there's a procedure, if it's an implantable device or a device used in a surgical procedure; and the other way around, the devices exist, and they may in fact be what makes the surgical procedure possible.

So the impact of a new device upon a procedure, if we want to think about it that way, can come in different forms. The device might need a new procedure, so the procedure might not be possible without the device. A pacemaker is an implantable device; without the pacemaker you're not going to do the procedure. Or it may substantially alter a current procedure, and this is very much the scenario of a lot of the examples we've talked about: the procedure already exists and is in wide use, and then a new device comes along that leads to the procedure being altered in some way. One example might be knee replacement: total knee replacement existed for a long time, and then devices for partial knee replacement became available. That resulted in a substantial alteration in the surgical procedure, enabled by a device that suited that alteration.

And then we might characterize like-for-like replacement, where the procedure to all intents and purposes is not altered too much, but there may be a slight tweak, or the new device may be cheaper. It may be easier to use, and maybe it's an improvement as well, but we're talking about minor possible improvements here, if any at all.

The second thing I want to pick up, which has been bubbling away a few times in the background, having talked about this device and procedure interaction, is IDEAL and RCTs -- and, to somewhat resolve some of the tensions that were in our discussions, RCTs come in different shapes and sizes. There was some discussion this morning about when an RCT is and isn't possible, and the answer might be yes and no, but people mean different things when they're talking about it. There are already different types of RCTs, and when you talk about medical device RCTs, you're talking about something quite different from a pragmatic trial that takes place when a procedure is in wide clinical use. RCTs very much fit in the IDEAL framework, but where they fit in is mainly the evaluation and assessment stages.

This was very much a part of the whole discussion when we had our meetings before: when RCTs could be done, when they should be done, what role they could have. I'm going back to a point made this morning about the tension between mandating what you think is feasible versus what you think would be the best. That tension very much came into our discussions, and we felt as a group that we had to accept that RCTs have a role to play, and that some of us may feel very strongly that that role should increase, but also that if we look around ourselves, realistically there are many cases where RCTs have not been the determining factor in the widespread use of a procedure, and to some extent that's true of devices as well. IDEAL has to some degree chosen not to get involved in that debate, recognizing that the RCT has become not only the default but mandatory in the pharmacological setting -- because it's enforced by law. That is not the case here, and that's something we worked with; but we want to recognize that there are different types of studies, that RCTs can play a valuable role, and that observational studies or registries can play a very valuable role in the assessment process.

And there are other possibilities. They were touched on this morning, and IDEAL is not about what you can do, but it's about setting up a system that would be a step forward from where we are now. And it's not in any way saying well, you can't do something better. If you're able to do it, that's brilliant, but we are looking at setting out a pathway that could be implemented in widespread use.

So if I move on, then, to talking about the Stage 4 recommendations -- and this somewhat reflects the fact that for Stage 3, assessment, in very many cases a lot of the studies that we want to be done are done, and the issue is more about the timing of when they occur. In most of the concerns about medical devices that I'm aware of, the problem is information coming to light later than it should have; to some extent the studies eventually do get done.

In Stage 4, this is where we felt we could make clear recommendations, and these pick up very much on what has already been discussed this morning. And to some extent -- the idea, no pun intended -- the idea that this is innovative: in terms of our discussion this morning, maybe the only added value here is putting the pieces together so that we can facilitate the discussion that's going to take place next. One of the areas where there was consensus that we needed to improve was ascertaining exposure, and this has been talked about this morning. A key step forward in that process, as was described to us, is the barcoding of medical devices, which clearly is a step forward. But as was also pointed out, it needs more than that: it's how the barcodes are actually used, how they go into the clinical notes, and what process is set alongside that. And it's not just whether the device was used; we're talking about standardization of terminology and how it's used, what procedure it was linked to, how that is then coded in standard, routine medical databases or registries, however you wish to describe them, and standardized in the capture systems for insurance and other purposes. If we can move towards a more standardized process, we will facilitate this long-term surveillance, and we will be able to identify problems quicker and more readily.
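To make the standardization point concrete, here is a minimal sketch of the kind of device-exposure record implied above, linking a scanned device identifier to the procedure it was used in so that clinical notes, registries, and insurance capture systems can later be joined. The field names and example values are illustrative assumptions only, not any existing standard or agency schema.

```python
# A hypothetical device-exposure record; field names and values are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DeviceExposureRecord:
    patient_id: str            # pseudonymized patient identifier
    device_identifier: str     # scanned barcode / unique device identifier string
    procedure_code: str        # standardized procedure terminology code
    procedure_date: date
    facility_id: str
    surgeon_id: str
    laterality: Optional[str] = None   # example of a key variation in device usage

# Hypothetical usage: the same record, captured once at the point of care,
# can feed the clinical note, a registry extract, and the billing/insurance
# capture system, so exposure can be ascertained later from routine databases.
record = DeviceExposureRecord(
    patient_id="P-000123",
    device_identifier="(01)00844588003288(17)261231(10)LOT42",  # illustrative UDI-style string
    procedure_code="0SR9019",    # illustrative code only
    procedure_date=date(2011, 12, 2),
    facility_id="HOSP-77",
    surgeon_id="SURG-15",
    laterality="left",
)
print(record.device_identifier, record.procedure_code)
```

The design point is that exposure is recorded once, in standard terminology, and then reused downstream, which is what makes long-term surveillance and signal follow-up practical.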

The next point is just to emphasize the place for enhanced active surveillance. Surveillance does take place, and there are studies that do this, but it seems clear from recent examples that this needs to happen more often. It has been attempted in the past on a risk-based footing, but we need a more proactive approach. You might say a registry is a preferable approach, or preferable relative to what is more typical. This would be a large step forward where it's appropriate, where we have a medical device that warrants this sort of attention, and the more natural place to put it would be at the procedure level. There we are able to record key variations in device usage, and that can be adapted over time as new devices come online, which may bring new adaptations of the procedure.

So that's all I wanted to say at this point. Thank you very much.

(Applause.)

DR. GROSS: And thank you. Do we have any time -- so the question, I guess for Danica, is this: it's about 4:15. Would you like to entertain some questions, or would you like to go into structured small group discussions?

DR. MARINAC-DABIC: If there are burning questions, yes. If no, we can maybe move to the think tanks and, you know, address the questions in that venue.

DR. RITCHEY: Okay. Seeing no burning questions, we're going to do three think tank discussions, and we're going to have -- yes, we're going to split into three groups for this. So if you are a regulatory person, then stay in this room. If you are part -- oh, sorry. Did this change?

DR. SEDRAKYAN: We changed the way we're going to split. We're not going to split into stakeholder -- like stakeholder groups. We decided to keep it more like heterogeneous, each group to be more heterogeneous. So maybe right side, left side, middle or left side, right side, end.

DR. RITCHEY: So in other words, if you want to stay in here with Art, then stay in here. And if you don't and you're on that side of the room, then you'll go to the --

DR. SEDRAKYAN: (Off microphone.) 1506.

DR. RITCHEY: -- 1506. And if you don't and you're on this side of the room, then you'll go to 1507, I do believe.

DR. SEDRAKYAN: (Off microphone.) 1504.

DR. RITCHEY: 1504. Sorry.

(Off the record.)

(On the record.)

DR. RITCHEY: Okay. A few logistics before we get started in the last session. The cabs to go to dinner will be here about 6:30, and so we need to wrap up before then. Also, for those of you at the table, you have four new sheets of paper in front of you. One of them is an agenda which includes the agenda for tomorrow on it. We're going to begin at 9 a.m. at the Kirkland Center at the National Labor College, which is just down the street. The other three pieces of paper in front of you are the three case studies for tomorrow morning. At 9 a.m., we're going to begin by going through these three different case studies to talk through them, to talk about how the TPLC approach at FDA and the IDEAL framework can be partnered together. So these are the case studies that Dr. McCulloch had mentioned earlier that we're going to go through tomorrow morning.

And I think our last session is to walk through what was discussed in the groups, and so I'm not sure who wants to start.

DR. BOUTRON: I can start. So for our group, the perfect system would be one where all the information is in an electronic database. We would have a unique identifier for the device, and the database would track all the information about what happened to patients: all the clinical information, all the information related to the surgical procedure, and all the -- information. And we would have a link between all the records that are used for clinical care of patients, and we could also have automatic information sent to clinicians, sent to surgeons, and sent to patients, so that the patient would have a message saying that they need to register some data about their health. We would need a core set of outcome measures that would be patient centered. And this system would allow us to do nested randomized controlled trials, for example, or to do some specific analyses to identify adverse events or effectiveness.

So I don't know if, Richard, you want to add anything or --

DR. LILFORD: Thank you, Isabelle. No, that's fine. Then we went on to the barriers to making this dream of the electronic patient record, which would do everything, actually happen. The first one, which surprised me slightly, was the evidence-based culture among surgeons, which apparently is imperfect. The next barrier to this magnificent Utopian dream was compatible IT systems: how are we going to get systems which really are, you know, compatible among hospitals and between hospitals and the community, such that this sort of automated database could be constructed? That's a problem. We'll come to overcoming the barriers in a moment.

The next area was that a system like this is easy to describe but actually much harder to do. If you look at electronic records, they are very incomplete and in some ways inaccurate, and one would want to overcome this in a system like that, so it may not offer as much as it first promises, because the information is often inadequate and often not coded. It might be present only in free text, and that would create certain difficulties.

Then came a sphinx of a problem, which is this: if you had such an amazing system of multiple patient records, tracking devices over a great period of time, well, then you've got so many possible comparisons to do, so many signals, that many of them are bound to come out positive. So you have a big problem with false positives, a big methodological issue, which you wouldn't have with a more targeted database set up for more specific circumstances. This would be particularly a problem if we followed one idea that Trish Groves suggested, which was to have the computer system automatically looking for signals, creating its own data and identifying when something seemed to be going wrong.

And the last problem, or last barrier, we identified was of course the one that always appears when you have databases like these, and that is the ethics of patient consent. This would apply especially if you weren't just using it as a passive archive but were interacting with patients: using it to automate randomized trials, to sense when you need to send patients data collection forms, to collect more information, patient-defined outcomes, for instance. What would patients say about that?

So those are the barriers. At this point, we only had 90 seconds left in which to work out how to overcome the barriers, but that was quite enough.

So, as far as the evidence-based culture is concerned, well, the answer there is in curricular design and training of surgeons. And, in fact, Peter McCulloch, who is a surgeon from Oxford, he and his surgeons are already going through that transformation into an evidence-based cohort.

The question of compatible IT systems is more difficult to overcome, but the future is already here. Kaiser Permanente, for example, does have one system which links the hospital and the community. So does the United States military, and so by having a licensing agreement whereby people have to be transparent about what data are in their systems and where, this barrier could be overcome. And there are many countries -- I think Sweden particularly, and other Nordic countries -- who have exemplary examples of such record linkage, and indeed Scotland has. I think Aberdeen has a very good record linkage system. Am I right about that?

(No response.)

DR. SEDRAKYAN: Not Aberdeen but Dundee. I get confused when I go north of the border.

Then the next problem was the accuracy and completeness of data. One solution there would be to identify when there was a really important question, perhaps questions identified by patients themselves and particularly important to them, and put some money into making sure that the data were adequately collected for those, just as one would do for a standing registry, for example.

The question of too many signals -- I got quite a lot of discussion going as to how we might deal with that. There are various methods to deal with it statistically, one of which is to use half the database as a hypothesis-generating database and the other half as a hypothesis-testing database; there are statistical adjustments, and you can prioritize certain questions identified in advance, and so on. Quite a lot of experience on this whole subject has been gained from people who work with proteomics and genomic databases, who have dealt for many years with this problem of far too many potential signals. So we don't think those problems are insurmountable.
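As a purely hypothetical sketch of the split-sample idea just described -- not anything the group actually specified -- the code below screens many device/outcome comparisons on one half of a database with a false-discovery-rate adjustment, and then re-tests only the flagged signals on the other half. Signal names, counts, and thresholds are all made up for illustration.

```python
# Hypothetical split-sample signal screening with a Benjamini-Hochberg
# false-discovery-rate adjustment; all data below are invented.
import numpy as np
from scipy.stats import fisher_exact

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of p-values rejected by the BH step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order]
    m = len(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = ranked <= thresholds
    k = (np.max(np.nonzero(passed)[0]) + 1) if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

def screen_signals(counts, alpha=0.10):
    """counts: name -> (events_device, n_device, events_comparator, n_comparator).
    Returns names whose one-sided Fisher exact p-values survive BH adjustment."""
    names = list(counts)
    pvals = []
    for name in names:
        a, n1, b, n2 = counts[name]
        # 2x2 table of events / non-events for device vs. comparator
        _, p = fisher_exact([[a, n1 - a], [b, n2 - b]], alternative="greater")
        pvals.append(p)
    mask = benjamini_hochberg(pvals, alpha)
    return [name for name, keep in zip(names, mask) if keep]

# Hypothesis-generating half: screen everything with a lenient threshold.
half_a = {"hip_X_revision": (30, 500, 15, 500), "hip_Y_infection": (8, 400, 7, 400)}
candidates = screen_signals(half_a, alpha=0.10)
# Hypothesis-testing half: re-test only the flagged signals more strictly.
half_b = {"hip_X_revision": (28, 480, 14, 470)}
confirmed = screen_signals({k: half_b[k] for k in candidates if k in half_b}, alpha=0.05)
print(confirmed)
```

The same idea scales to thousands of device-outcome pairs; the point is simply that signals are generated in one partition and confirmed in another, with an explicit multiplicity correction at each step.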

And last was the question of ethics. This is obviously a complicated question which has occupied many minds a great deal of time, but many record linkage systems have been set up around the world, and as long as you follow an appropriate methodology to engage patients from the start -- very much along the lines you were suggesting earlier on, Susanne -- we thought that the majority of patients would give consent for their follow-up, and the proportion that did not would be rather small.

So those were our thoughts on that.

DR. GRAVES: Our group covered many of the same issues, but what we talked about is really a system of postmarket surveillance, saying there had to be a variety of things available to be used within a postmarket surveillance system. One of the things we talked about as being quite critical was keeping an adverse event reporting system, and we felt it was very important to collect information in a prospective manner, from the point of view of what was actually happening with devices, whether you use registries or whether you have the perfect system of electronic records, and whether that's accessible. There was a lot of discussion about that and about the practicality of doing it in the near future.

The other thing that was felt to be very important is that when you're collecting information, for different devices you must have a device-specific dataset that you're interested in, and the outcomes for that device need to be defined. There will be questions that need to be answered, and so you'd still need the option of saying that you want clinical trials to be undertaken as part of the postmarket surveillance system to answer very specific questions where you don't actually have information, and everyone felt that was a very important thing.

And finally, as part of the perfect system, the information coming out of that system should be publicly available, and available to all stakeholders, which includes patients and their significant others, relatives and so on, clinicians, regulatory bodies, and obviously industry. As to what information is provided, there was some discussion about this, and it was felt important that there should be information in a variety of different areas, including the potential harm of a device, the potential benefits of a device, and also the performance of a device -- which does not necessarily mean harm or benefit, but rather that at a certain period of time a certain number will do this; you may expect a lead to break in 20 years, that sort of performance information. People felt that was very important, so that patients became very important stakeholders and also important contributors of information, particularly through systems like adverse event reporting -- really making it accessible for patients to be able to report when there are problems.

With respect to the barriers to change and how to overcome them, for adverse event reporting one of the things we talked about was that the various stakeholders actually aren't very good at adverse event reporting, and whether or not it needed to be made mandatory and the adverse events defined. That was how we would overcome that particular issue.

With the postmarket surveillance system, clearly the barrier to implementing it, particularly with registries, is getting clinicians to provide data, so it's very important to make it very easy for that information to be provided, and I think that if you could make it almost clinician-free, that would be even better. But the point people made very strongly was that any system you had would have to be very user-friendly.

And I think that that probably covers for most of the discussion that we've had. I don't know -- Kay, is there anything that --

DR. ERGINA: So I guess I'll go next. Much of what we talked about in terms of the perfect database system has pretty much been covered. We wanted it to be affordable and to interact with other components, so that other databases or other collectors could communicate with it. We were interested in having it flexible so it could have alternate uses in the future, although we ran up against a bit of an issue there, because the automatic collection of data, as some of us well know, is not so automatic, and it's not so complete, and one has to recognize the purpose of the database or registry that you're collecting for. So those are the things I thought were important to mention other than what has already been mentioned.

It was the same stakeholders that we talked about here: the physicians, the patients, the system itself, non-M.D.'s perhaps collecting the data. Taking it out of the hands of the M.D.'s might facilitate getting over the barrier of collecting the data. So we had to incentivize the stakeholders to overcome the particular barriers, whether by sharing the cost between industry and government and having everybody participate in the cost of the system, knowing that in the end the benefit was to everybody.

We had a struggle at the beginning about what the purpose of collecting the data was, because a lot of the discussion during the day had been around safety, talking about harms, and not so much about efficacy. So one of the issues is: why are we collecting the data? If it's to do surveillance for harms, that's one thing. That's actually something the methodologists in our group had a bit of difficulty with, because that shouldn't be the primary purpose of studies and trials of implants that are intended for therapeutic benefit. But we acknowledged that that seemed to be an important purpose here, and these are the issues we came up with to be able to make it a useful, at least first, go at having a comprehensive system.

One of the things we also talked about was the ability of this data collection infrastructure to be used later for effectiveness trials, if you have the infrastructure in place and the data can be communicated with other databases. Oh, yes -- one of the things we thought about was that there should be a central repository. Because of the difficulty of having a system interlock with other systems -- as was being pointed out here, there are a lot of issues in interlocking with other systems -- perhaps it would be even better to have a central repository that all the data would end up being fed into. So those are the things we thought might be helpful.

But in the future, once we have the infrastructure in place, effectiveness and efficacy trials would be easier to do, whereas we'd start with what seems to be the primary purpose at this meeting, which is to look at surveillance of harms and safety.

DR. GRAVES: If I could just add one thing that was mentioned and that I forgot to mention: our group felt it was very important that these systems were really borderless, that nations actually cooperated across different jurisdictions, because there may be devices that are used in one country but not another, and before they come into another country there is already information available. So there should be ease of access in sharing this information between countries.

DR. McCULLOCH: Can I just ask Stephen about the comment about including randomized trials in the postmarketing surveillance because we kind of touched on that briefly in our group, and I made the comment that most of the time this was a situation where we were past the stage where, you know, they were going to be helpful. So I just wanted some examples of what you were thinking about.

DR. GRAVES: One example is one that I was involved in, which is actually not so much -- well, it's partly a device and procedure. It's vertebroplasty, which has been in use for many years. I was involved in designing a randomized controlled trial on vertebroplasty, and that was published in 2010 along with the Mayo Clinic's randomized controlled trial on vertebroplasty. And the upshot of that is that in many parts of the world, funding has been withdrawn for vertebroplasty. So it's a technique that was well established, but there was never any evidence for it, and when the evidence was gained late, after a lot of patients had been treated, it showed that it really had no benefit at all. So I think that's an example of a randomized controlled trial done late that has actually had a very big impact on the healthcare system.

But you want to talk about the use of clinical trials as part of postmarket surveillance?

DR. LILFORD: I've got another nice example if you want thinking time.

DR. BRUCE CAMPBELL: No, no. No, no. Richard, you know I never think before I speak. Come on. (Laughs.) No, I mean I come back to a sort of point that I made perhaps repeatedly in our group, which is horses for courses, which is to say, you know, that for some simple things, all you actually need is adverse event reporting. For others you need a register, and for others you may have very specific questions that require a trial. Now, it may be that that for certain things is a randomized control trial because your specific question is this looks okay to approve, but is it actually in the long game better than what we have? And it's just one sort of mischievous observation I have about today, that we keep talking about the life cycle. But all the graphs only show you getting it into use. They don't show you it dying off because something better has come along.

DR. VANDENBROUCKE: Maybe, as a kind of general comment that came back in several groups, there is the question of the purpose of the postmarketing surveillance, and I have the impression, also on hearing what FDA was telling us, that maybe more differentiation is needed in the purpose. There are harms, meaning untoward events that happen to patients, which is different from performance, like the device running out of its lifetime, and which is different from the question of which device is better -- which in some systems, like maybe for hips, you can really study, so that you know which ones last at least ten years, for example.

But then there is the question about benefits that some of us talked about, where personally I would have a big question mark, because benefit means: was it good for the patient to have had this device in comparison with not having had it? And that -- benefit, effectiveness -- seems to me quite different from harms, performance, and maybe some indication of which device performs better than another.

DR. LILFORD: Well, I agree with that about 97 percent. But benefit is a complicated construct to which the other endpoints you mentioned contribute. For example, that a device has a short half-life or a high revision rate or whatever feeds into the benefit. And of course, there are some benefits that you might want to capture in another way, such as quality of life or patient-reported outcomes or whatever.

DR. BARKUN: The trouble with this whole subject is that it's very hard to come up with any hard and fast rule -- you know, that randomized trials are for effectiveness and postmarketing surveillance is for safety -- because while that might generally be the case, and it's quite a nice way of thinking about it, it never falls out as neatly as that. One nice example I use of randomized trials giving evidence about an adverse event is the use of anti-tumor necrosis factor agents in arthritis. When we put all the randomized trials of anti-TNF together, we found an increase in cancer rates.

DR. SEDRAKYAN: Any more questions from people who didn't have a chance to talk today, comments?

(No response.)

DR. SEDRAKYAN: So we can finalize the public part of our think tank meeting, which was today, we'll have taxicabs around 6:10, between 6 to 6:10, Danica?

DR. RITCHEY: 6:30.

DR. MARINAC-DABIC: Well, we changed it back. Sorry. Because it looked like we were going to finish (off microphone) --

DR. SEDRAKYAN: So we wanted to thank you again for this lively discussion and contribution to the meeting and --

UNIDENTIFIED SPEAKER: Remember to read --

DR. SEDRAKYAN: Read --

Yes, yes. And we particularly wanted to thank our support staff at FDA for organizing the meeting here in-house and want to ask you to read these cases that we'll be distributing for tomorrow's discussion. So we have three cases, and we have description of these cases.

You're going to distribute them?

DR. RITCHEY: They've been distributed. We talked about it at the beginning of this session. These are the three case studies for tomorrow morning, and so they should be part of your packet now. They were distributed a little while ago.

DR. SEDRAKYAN: Okay. So I know it's probably hard after the dinner for you to spend time reading them, but if you can make an effort we'll appreciate it.

(Whereupon, at 6:00 p.m., the meeting was adjourned.)

C E R T I F I C A T E

This is to certify that the attached proceedings in the matter of:

BRIDGING THE IDEAL AND TPLC APPROACHES FOR EVIDENCE DEVELOPMENT FOR SURGICAL MEDICAL DEVICES AND PROCEDURES

December 2, 2011

Silver Spring, Maryland

were held as herein appears, and that this is the original transcription thereof for the files of the Food and Drug Administration, Center for Devices and Radiological Health.

____________________________

CATHY BELKA

Official Reporter