March 8, 2007 Sentinel Network Public Meeting Transcript

FOOD AND DRUG ADMINISTRATION
SENTINEL NETWORK PUBLIC MEETING

March 8, 2007

University System of Maryland 
Shady Grove Center

8630 Gudelsky Drive 
Rockville, Maryland

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway 
Fairfax, Virginia  22030 
(703)352-0091

TABLE OF CONTENTS

Moderated Discussion Between Federal Government and Invited Speaker Panels on What Was Heard On the First Day

Moderated Discussion of Opportunities for Collaboration    



P R O C E E D I N G S              (8:15 a.m.)

     MR. SHUREN:  Good morning.  Welcome.  Let's go ahead and get started.  I know folks are still trickling in, but I think we have critical mass, so we can get going.

Welcome again to part two of the sentinel network public meeting.  As I had mentioned, we are going to have a different format this morning, as you can already see.  The layout of the room is different.  What we have done is take our federal government panelists from the other day and intersperse them amongst the invited speaker panelists here.  Our goal is now to focus on the concrete steps we should take together to start assembling the network.

     We are going to begin this morning to take a little bit of time just to get some reflections on the discussions we had yesterday, if folks have additional questions or thoughts regarding that.  Then we will focus a lot more on concrete steps. 

     I know on the agenda that you have seen we have laid out very specific times for the rest of the folks here in the audience to have an opportunity to weigh in as well.  Since I chair this meeting, I am going to assert executive authority and try to make that a little more flexible.  So what I may do is, if there is a natural break in the conversation or I see a lot of people getting antsy in the audience, I may just stop our conversation here and allow for an open mike session for a little bit of time.

     You do not need to register for it.  We have got mikes on two sides of the room.  Just simply come forward to the microphone, introduce yourself, say who you are, what is your affiliation, and then let us know your thoughts.  What I ask is to keep your comments brief so there is ample opportunity for others to weigh in.  But that said, you can have more than one bite of the apple, so don't feel that if you spoke once, you can't come up again.  You can come up as many times as time will permit.

     I will ask the panelists here -- we are going to go around and introduce ourselves.  I will also ask that at least the first time that you speak, you introduce yourself again, folks in the audience.  But so that we keep a flow, I won't ask that you introduce yourself every single time you have a comment, or our transcript will probably be 50 percent introductions and 50 percent something else.

     We are going to break for lunch around 12 o'clock.  We are going to be a little more generous today and allow for an hour and a half.  That's it. 

     Catherine Lorraine with the FDA is going to be our moderator today, so we will let the conversation move along, but she is going to jump in from time to time, maybe steer the discussion in a different direction, depending on whether there are other issues we need to capture.

     What we are going to try to do as well is, if there are some key points or some points for which there is uniform support, we are going to put those on a laptop here and show them up on a screen behind us, so that if folks in the audience hear something and say, I would like to weigh in on that point as well, we will have it up on the screen.

     So with that, why don't we go ahead and begin.  I am Jeff Shuren, Assistant Commissioner for Policy at FDA.

     MS. LORRAINE:  Catherine Lorraine in the Office of Policy at FDA.

     MS. JUNG:  Good morning.  Connie Jung, Office of Policy, FDA.

     MR. DAL PAN:  I'm Gerald Dal Pan.  I am the Director of the Office of Surveillance and Epidemiology at the Center for Drug Evaluation and Research at FDA.

     MS. SLUTSKY:  Hi, I'm Jean Slutsky, and I am not with FDA.  I direct the Center for Outcomes and Evidence at the Agency for Health Care Research and Quality.

     MS. TRONTELL:  I'm Anne Trontell with the Agency for Health Care Research and Quality, and program director for the CERTS program.

     MS. PAXTON:  Hi, I'm Liz Paxton, Director of Surgical Outcomes and Analysis for Kaiser Permanente.

     MR. VALENTINO:  Mike Valentino.  I am pharmacy director for the Department of Veterans Affairs.

     MS. CUNNINGHAM:  Fran Cunningham, Director of the Center for Medication Safety at the Department of Veterans Affairs.

     MS. RUDOLPH:  Barbara Rudolph, the Leapfrog Group, Director of Leaps and Measures.

     MR. BUDNITZ:  Dan Budnitz, from the Division of Health Care Quality Promotion at CDC.

     MR. PLATT:  Richard Platt from Harvard and the HMO Research Network.

     MR. RESNIC:  Fred Resnic from Harvard, Brigham and Women's Hospital, representing research programs in safety signal detection.

     MR. OVERHAGE:  Marc Overhage, Regenstrief Institute and the Indiana Health Information Exchange.

     MR. MANDL:  Ken Mandl, Children's Hospital, Boston, Harvard Medical School and the MIT Center for Biomedical Innovation.

     MR. DATENA:  Mike Datena, Department of Defense, the electronic health record.

     MR. MC GINNIS:  Tom McGinnis, Chief of Pharmaceutical Operations at the Department of Defense.

     MR. CHUTE:  Chris Chute, Professor and Chair of Biomedical Informatics at Mayo Clinic.

     MR. HILL:  Jeffrey Hill with the American Medical Group Association.  I am CEO of the Anceta Collaborative Data Warehouse.

     MR. BRAUN:  Miles Braun.  I am Director of the Division of Epidemiology at the Center for Biologics at FDA.

     MR. GROSS:  Tom Gross, Director of the Division of Postmarket Surveillance at the Center for Devices and Radiological Health at the FDA.

      Agenda Item:  Moderated Discussion Between Federal Government and Invited Speaker Panels on What Was Heard the First Day

     MS. LORRAINE:  Good morning, everyone.  I would like to start by asking the members of the audience if you are able to clearly hear all of the speakers.  This is a large room.  Yes, the microphones are working well. 

     I will just ask all of the speakers, I know we don't have microphones for every single person, but when you are speaking, if you could speak directly into the microphone so everyone can hear the important things that all of us are going to be saying this morning.

     I would like to begin by asking if anyone would like to get started with the first part of our discussion, which is going to be some reflections on the information that we heard yesterday.  Is anyone interested in getting us started?

     Yes, please.

     MR. CHUTE:  I was impressed at the dichotomy presented in many of the presentations between an active and a tacit surveillance strategy.  Clearly if the sentinel network evolves along one path, the infrastructures, the informatics, the collaborations, the communities would be entirely different than if it evolves along the other path.

     By that, I mean if one relies on the engagement of clinicians to cognitively recognize the sentinel event and then subsequently report it, as opposed to the more passive approach, where that information would be harvested through NHIN infrastructures, through other data feeds and integrations.

     It raises the question of whether analogous to biosurveillance in public health this should become a component of that on the AHIC agenda, or whether it should be an independent agenda, regardless.  If it is a tacit surveillance drawing information from the clinical community through a variety of mechanisms, it also presents a profoundly different analytical challenge.

     My bias is overwhelmingly on the side of a passive surveillance environment.  It is implausible to me that a clinician fraught with the vicissitudes of practice is likely to effectively recognize sentinel events when they occur or, if they do, to go through the extra requirement of an active surveillance engagement.

     Furthermore, it is obvious that adverse events are going to occur on a continuum of severity effectively, and more thoughtful and useful understanding of outcomes and adverse events would derive from a passive system where that continuum in fact is captured.

     I'll stop.

     MR. SHUREN:  Let me circle back on that, too.  The terms passive surveillance and active surveillance, particularly active surveillance, may mean different things to different people.  Maybe we can take a minute to see, for purposes of discussion, what folks actually mean when they say active surveillance versus passive surveillance.  Is active surveillance requiring an explicit act to register an event, such as filling out a form?  Is passive surveillance the analysis of aggregated or group data drawn algorithmically from information sources?

     MR. DAL PAN:  I guess the active and the passive really depends on the perspective you are taking.  Something that is active for one person is passive for another, and vice versa.  We call our current spontaneous reporting system a passive reporting system from our point of view at FDA, because we are not going out looking for this information.  We are relying on clinicians, pharmacists, et cetera, to take some action, what you say, fill out a form and send it to us.  So we call that system a passive surveillance system, but that is from our point of view.  We are essentially waiting for those to come in.  We also call this a spontaneous system, because they are not required to fill out that form.  They choose to do so, to tell us.  So that is how we come to call our system a passive spontaneous system.

     So we look at an active -- in our frame of reference we look at an active system as one where action doesn't have to be taken at the point of care.  Rather, we can go out and look at these larger data sets and actively try to find something ourselves, rather than rely on waiting for a clinician to fill out a form and send it to us or to some other system.

     So I think at a minimum, active and passive really depend on the point of view you are taking.  Perhaps in that sense they are not very good terms.  I clearly understand what you are saying, but they are point-of-view terms.

     MS. LORRAINE:  Ken, I see you nodding your head.

     MR. MANDL:  I agree.  I think what we should probably do is just avoid the terms, because they are used differently by different people.  The perspectives are different and they are defined different ways.

     I think what Chris was getting at was something requiring manual data entry and active use of the system manually by physicians, versus something more along the lines of a data process system that might tend towards automation.

     MR. RESNIC:  I think it is important to recognize that these are very complementary processes, though I absolutely agree, the focus of this effort ought to be the data processing and surveillance.  But one should not dismiss the benefits of the voluntary reporting for the detection of unexpected associations and possibly for the detection of events that one would not have predicted.

     When we have a data processing based system, we are going to be required to anticipate the types of events or put in some sorts of logic to cull the data in ways to detect abnormal patterns, yet relying on the population of physicians, practitioners and providers to somehow also give us a head start and insight is important.

     So we shouldn't abandon by any means the efforts of the voluntary provider based reporting systems, though I completely agree, the efforts and discussions should be focused on the data processing, data accumulation, and signal detection.

     MS. TRONTELL:  I do agree on the complementary nature of the two data systems.  Maybe I might propose some language.  Maybe clinician centered surveillance would be a term that would be meaningful to both of us.  Then you would have more of the data driven surveillance so that we are all talking about the same things at the same time.

     MR. MANDL:  I'll just add that we also I think have the opportunity to actually have patients participate, too.  So we might even want a more broad term than clinician centered.

     MS. TRONTELL:  Individual reporting. 

     MR. HILL:  What I am hearing is, perhaps we need a tiered system, where in any case we start off with perhaps an unsuspected event from a pattern of activity we get from data processing that then becomes a suspected event based on that data, which should be fed back to the physicians and the providers so they can look more clearly once it becomes suspected. 

     Therefore, that communication route that you have identified in your proposal is important, not only for tracking but for disseminating the suspicion of an adverse event.  It could be something that the FDA might have suspected from the clinical trial of a product that it wants to look for in the real world, or it can be something totally unsuspected.

     I think in terms of a sentinel, as my mother sits up in her room and watches the neighborhood, it is those things that are not expected that you want the sentinel to see, but then find a way to disseminate that and confirm that.

     So I think we really need a multiple system.  You are trying to improve the tracking as well as the identification.

     MR. GROSS:  From a device perspective, I think either path is okay as long as the system is capable of capturing the suspected and the unexpected. 

     We talked about terminologies yesterday.  In the device sector, part of the issue is not only what happens to patients, but what happens to the device.  Whether you take the passive or active route, as long as the system has the capability of capturing that sort of data, I think it would work better in a tiered fashion than otherwise.

     MS. CUNNINGHAM:  I concur with a lot that has been said earlier.  We have to operationalize this quite often in our system, because we have to go directly back to the patient from our patient safety center. 

     We do use the spontaneous reporting system which we consider passive surveillance in our system.  We use that in tandem with what we do when we are investigating things using an integrated database.

     When we have something that is known or highly suspected, then we use our integrated databases to confirm it, and then roll out as far as communication is concerned.  If it is something that is unsuspected or something that we cannot confirm quite easily, there is a lot more effort that goes into that before one can start sending that information back out to the patient population or even to your physician population. 

     I think that is something that needs to be addressed; how do we begin to develop communication with the signal detections that ultimately will be occurring, or with something that is suspected but not necessarily known or mildly suspected.  I think as we begin to think of the tiered approach, to think about how to directly communicate it to the practitioner, and then ultimately to the patient if you need to act on it relatively quickly.

     MS. PAXTON:  I agree completely with the focus on the data processing, but really want to emphasize the importance of clinician reported data.

We have integrated documentation as well as data collection at point of care.  It has been very effective for us in determining issues that we need to focus on.  So I want to emphasize the importance of that.

     MR. PLATT:  I will weigh in also in favor of the large upside opportunity in making good use of data that are routinely collected.

     But I also want to mention a hybrid model that uses data to elicit surveillance.  As part of our work with the Vaccine Safety Datalink, we have been using a system that looks at diagnoses and procedures and other information in the electronic medical record to prompt clinicians to consider an adverse vaccine reaction.

     This elicited program uses a white list.  There are a lot of diagnoses that we assume are never adverse events, and if it is not on the white list, the clinician gets a popup question on the EMR that says patient had this vaccine ten days ago, you just entered a diagnosis of something, do you think it might be an adverse event?  If the answer is yes, then the question is, would you like to submit an adverse event report?  Then if the answer to that is yes, a pre-populated report comes up and the clinician can complete the free text part, or not.  But then the clinician is done.  The rest of the reporting is handled on the clinician's behalf.

     So that kind of hybrid might take best advantage of these automated systems and getting clinician input to help inform understanding of what that event means.
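
The following is a minimal sketch, in Python, of the elicited surveillance logic described above: a diagnosis entered within a risk window after a vaccination, and not found on a white list of diagnoses assumed never to be adverse events, triggers a prompt and a pre-populated report.  The risk window, the codes, and all function and field names are illustrative assumptions, not a description of the actual Vaccine Safety Datalink software.

```python
# Illustrative sketch only; window length, codes, and names are assumptions.
from datetime import date, timedelta

RISK_WINDOW = timedelta(days=42)      # assumed post-vaccination follow-up window
WHITE_LIST = {"V20.2", "V70.0"}       # example "never an adverse event" diagnoses


def should_prompt_clinician(vaccination_date: date,
                            diagnosis_date: date,
                            diagnosis_code: str) -> bool:
    """Return True if the EMR should pop up the adverse event question."""
    within_window = timedelta(0) <= (diagnosis_date - vaccination_date) <= RISK_WINDOW
    return within_window and diagnosis_code not in WHITE_LIST


def prepopulated_report(patient_id: str, vaccine: str, diagnosis_code: str) -> dict:
    """Structured part of the report; the clinician adds the free text, or not."""
    return {"patient": patient_id, "vaccine": vaccine,
            "diagnosis": diagnosis_code, "free_text": None}


# A diagnosis ten days after vaccination that is not on the white list
# triggers the prompt, and a pre-populated report is offered.
if should_prompt_clinician(date(2007, 3, 1), date(2007, 3, 11), "345.90"):
    report = prepopulated_report("anonymous-001", "vaccine X", "345.90")
```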

     MS. RUDOLPH:  Just taking it a step further beyond that identification, it seemed like yesterday from all the different perspectives that were presented, there are lots of different components to this sentinel network.  Some of those are research activities, some of those are standards that, while there might be existing languages and other kinds of things, there will still need to be standards work done in order to enable connectivity across those entities.  Also, on the reporting side, there are multiple ways to report this kind of information.

     So it seemed to me that I didn't see any one proposal that covered all of this well.  So in putting this together, I think there would need to be a variety of approaches and entities engaged in the activity, as opposed to selecting one of the presentations or proposals yesterday.  It didn't seem like any of them covered everything to the extent that they needed to. 

     MR. HILL:  I think yesterday we heard the three main components being surveillance, assessment and then communication.  In this hybrid model, if through pattern recognition or culling through large amounts of data we identify an issue, then it must be assessed.

     As the woman from the pharmaceutical industry mentioned yesterday, there could be other confounding variables; was it related to a drug, was it related to a medical product, or not taking that product appropriately, or other comorbidities or gaps in the treatment.  That assessment component is essential.

     So I think if we do, on one side of the hybrid, concentrate on looking at data and then we see a suspicious set of circumstances, we need the tools and the ability to get back to those physicians caring for those very patients.  We have to deal with the privacy issues on that, of course, but there are forms of networks in which that can be done.  So maybe it is slinging back and forth in the hybrid.

     When you go to a physician, it is because you suspect something.  You are not asking them to watch the world they live in, but then they have a motive to be involved in assessing that on behalf of their patient, let alone the population.

     MR. OVERHAGE:  This is taking a little bit different direction, but one of the things that was brought to the surface for me yesterday in the discussion is, there are many different -- when we talk about adverse drug events, I think of clinical trials, the kind of things that are comprehensive.  We are looking for everything because we don't know what is important.

     Now, obviously when you are doing a trial, that is important.  We need to get a sense of nuisance adverse events that might change patient compliance, or might make the drug not -- people wouldn't prescribe it because it causes side effects that would be undesirable for the patient or whatever, and perhaps from a safety standpoint as well.

     I don't have a clear picture in my mind of this, and probably other folks around the table do, but there is this other category of things that we are interested in, the torsades and the liver failure and death, and maybe delayed life or premature death that we are interested in finding.  Those are in a different bucket in some ways, and require different kinds of data and different kinds of detection.

     So I guess where I am headed is, in my mind I am wrestling with this question.  There are these groups of things that we really want to know.  We want to put it in the labeling.  We want clinicians and patients to be aware of it.  We may even want to monitor and intervene in health care settings, because it changes our patients' lives.  It may be important for a payor from an economic perspective, if people are going to have side effects and those sorts of things.

     These other serious events are in almost a different bucket.  They may be much rarer.  They are things that may dramatically shift the risk-benefit analysis that we do.  But they are also things that by and large are going to show up in -- when we talk about patient driven reporting, if somebody gets liver failure, they are often going to think a little bit.  It is going to show up in a claims database as a trace somehow of those kinds of events.

     Maybe I am way off base on this.  Do others think about it that way?

     MR. DAL PAN:  We have thought about this along those lines.  I agree with what some of the other people here have said, the need for both kinds of systems, the spontaneous reporting system that we currently have, plus these more automated databases and things.

     The way we have thought of looking into these databases could be along different lines.  One of them would be say a drug based surveillance system in these databases, where you say this new drug has come out, there is some stuff I don't know about it, so we will do surveillance in these databases to look at this drug.

     But the other is what Dr. Overhage said.  We are always interested in events like acute liver failure, torsade, you can make a long list of these, so that we can do an event based surveillance as well.  So you could use these systems in different ways, depending on what you are interested in.  I can imagine you could use them in both ways.

     MS. CUNNINGHAM:  I would like to state that it is not exactly the same thing, but the way we do things inside of our system. 

     We do have agents that of course cause an ADE.  They are known to cause it, either via a high dose of the agent or if you dose the agent with a patient who has some particular end organ damage.  So those tend to be things that we monitor, we consider to be quote-unquote low-hanging fruit.  They affect a very large volume of our patient population. 

     So you have a drug that could potentially cause a huge ADE in a patient who has renal insufficiency, so we track that drug, we track our patient population that has renal insufficiency, we identify it.  At times we are unhappy because we look at the large volume of patients we have identified, and we then have to go in and we have to intervene.  We intervene at the physician level and also at the patient level, and then we go back and monitor.

     That is important.  That is as important to us as detecting the unknown events that happen in very few patients.  So I think things have to exist on both levels, where you are monitoring from a very simple drug surveillance, I guess that is what you are saying, but we also need to look at newer things and things that we do not necessarily suspect, or things that we highly suspect, where we need to use more aggressive and intense data analysis to detect and act on.

     MS. TRONTELL:  I agree.  We have a tension here between relatively common adverse events that have influences on quality, cost, effectiveness, patient compliance with medications, and the tension with these potentially devastating rare adverse events.

     In thinking of active surveillance, probably already well known to many of you, as the prevalence of what you are looking for decreases, your predictive value, however your sensitivity and specificity are set, will go down.  So you are entering much more of a problematic area of false signals that might take a lot of resources if you are looking with a focus on those rare events.
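
To make the point about predictive value concrete, the standard relationship is shown below; the sensitivity, specificity, and prevalence figures are assumed for illustration and are not numbers from the meeting.

```latex
% Positive predictive value (PPV) as a function of prevalence p, for a
% surveillance rule with fixed sensitivity Se and specificity Sp.
\[
\mathrm{PPV} = \frac{\mathrm{Se}\, p}{\mathrm{Se}\, p + (1 - \mathrm{Sp})(1 - p)}
\]
% Illustrative numbers: with Se = 0.90 and Sp = 0.99, a prevalence of 1 in 100
% gives a PPV of roughly 0.48, while a prevalence of 1 in 10,000 gives a PPV of
% roughly 0.009, i.e., nearly every flagged case is a false signal.
```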

     MR. PLATT:  That's right.  On the other hand, having lived through the Vioxx problem, where we had a common very serious problem, it seems to me that the sentinel network we would like to build needs to be robust to find common problems which we managed to overlook, just because they were so common.

     So in the automated data driven system that we would construct, I think we would want it to be capable of at least having the potential for finding the rare and the common adverse events.  The public health impact of the common events obviously is much greater, so I think Fran was pointing that out.  There is probably a large category of common events that are potentiation events, attributes of agents that perform in ways that we didn't expect because they are used in certain kinds of people who are receiving those kinds of drugs.  I think we are going to need large systems to understand where those occur.

     MR. CHUTE:  I have the bad luck to be trained as among other things an epidemiologist.  The consequence of that is that although I do informatics these days, it was drilled into me that the plural of anecdote is not data.

     If I look at the misclassification steps that are obvious in a clinician or patient centered reporting environment, the cognitive functioning that is required to trigger that kind of thinking, and frankly the judgment and knowledge base of most people reaching those conclusions, they are effectively telling anecdotes or drawing hypotheses.  That is not, in a rigorous epidemiologic perspective of the world, a basis for inference.

     The question is, if you have a data driven environment, Dr. Woodcock posed a question yesterday, who would make those determinations, and is there precedent for adjudicating questions or potential false flags and the like. 

     I think the precedent for that kind of data analysis and inferencing derives from the tradition of patient safety monitoring boards, which are well understood and operate very effectively and make decisions that have significant financial consequences and societal consequences, in terms of risk-benefit, efficacy.

     So I think there is a sociologic precedent for drawing those kinds of conclusions dispassionately and usefully in society's interest.  But I am not persuaded that the mechanism of having ad hoc reports drawing from non-systematic information sources is adding benefit. 

     MR. SHUREN:  Just a followup question.  When you said the patient safety monitoring board, would you see something akin to it?  I have a clinical trial, I may have my data safety and monitoring board.  Here we would be looking at sentinel and whether there would be a concomitant patient safety monitoring board.

     MR. CHUTE:  Exactly. 

     MS. SLUTSKY:  Chris has an interesting concept.  As you can imagine, we have talked about this in various different forums, particularly with registries.  You are collecting all this data, but who is actually looking at it outside of a specific study?

     Observational studies haven't in general used the DSMB model.  Are you thinking of modeling it very similarly to a DSMB, or another formulation?

     MR. CHUTE:  Since I messed up the meaning of active and passive, you can tell I'm not real familiar with this community.  I had not thought deeply about it.  I do not presume to say that is the exact model.  But clearly I am raising the question. 

     If a data driven surveillance network emerges, it is clearly required as a concomitant activity to have some sort of oversight data monitoring representation, selected and managed with rotating membership perhaps, chosen with credentialed oversight, to evaluate the inevitable emergence of potentially false and potentially true signals.

     MR. BRAUN:  It seems what is being added around here are requirements for the system, and one of the options would be to go the signal detection route, and the other would be -- just to be simplistic -- signal testing or hypothesis testing, and they are obviously not 100 percent discrete.

     In our FDA systems, we don't say passive or active or anything.  Our FDA traditional systems, talking to Tom about the Center for Drugs, there are almost a million reports a year of adverse events at the FDA.  So whether or not the quality of the data are up to the standards we would desire, the system is at least numerically robust.  The people who take the trouble to send in those reports, to them those are signals.  They wouldn't otherwise have taken the trouble to fill out a form, which they are rarely if ever paid to do.

     So we have a large number of those reports, and we have developed some quote-unquote data mining approaches to try to make sense out of them beyond individual report review.  There are obviously flaws and limitations to the system, but I think given its longevity and the amount of effort that has gone into it, it is fairly well developed, in terms of a science form, art form, it is pretty far along.  I don't know how much farther we could go with that.
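
As an illustration of the kind of disproportionality screening often applied to spontaneous report databases in the published literature, the sketch below computes a proportional reporting ratio (PRR).  It is a generic example with made-up counts and is not a description of FDA's actual data mining algorithms.

```python
# Generic proportional reporting ratio (PRR) sketch; counts are made up and
# this is not FDA's actual data mining method.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """
    a: reports with the drug of interest AND the event of interest
    b: reports with the drug of interest and any other event
    c: reports of the event of interest with any other drug
    d: all remaining reports
    """
    drug_event_rate = a / (a + b)      # fraction of the drug's reports naming the event
    other_event_rate = c / (c + d)     # the same fraction for all other drugs
    return drug_event_rate / other_event_rate

# Example: the event appears in 3 percent of the drug's reports but only
# 0.3 percent of other drugs' reports, giving a PRR of about 10.
print(proportional_reporting_ratio(a=30, b=970, c=300, d=99700))
```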

     But on the other hand, in terms of hypothesis testing -- and one could consider each one of those reports as a signal at some level, if someone took the trouble.  How do we follow up on those?  It is almost humanly and systemically impossible to investigate all of those, so there needs to be some kind of triage.  But even with a substantial reduction of that massive influx of reports to be further followed up, we don't have the systems to do that.

     So my support would be to have a very large system that could be able to test hypotheses that would be population based.  When you say large, in my personal experience, when you start asking specific questions of the data, what seemed like a very large data set with several years or maybe more years, when you start really honing in on what you are interested in, it gets smaller and smaller and smaller.  The next thing you know, you have power problems to find relative risks of two or three, which are the relative risks for Vioxx.  That is the range that was seen there.  That was a relatively common exposure and a common disease.  So you can imagine what happens when you get into rarer exposures and rarer diseases.
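
A rough calculation makes the power problem concrete.  The sketch below uses the standard normal-approximation formula for comparing two proportions; the background risk, the relative risk, and the alpha and power levels are assumptions chosen only for illustration.

```python
# Rough cohort-size sketch for detecting a given relative risk of a rare
# outcome.  Assumes alpha = 0.05 (two-sided) and 80 percent power; all
# numbers are illustrative.
from math import sqrt

def n_per_group(p_unexposed: float, relative_risk: float,
                z_alpha: float = 1.96, z_beta: float = 0.8416) -> float:
    """Approximate subjects needed in each of two equal-sized groups."""
    p_exposed = relative_risk * p_unexposed
    p_bar = (p_exposed + p_unexposed) / 2
    term = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * sqrt(p_exposed * (1 - p_exposed)
                            + p_unexposed * (1 - p_unexposed)))
    return term ** 2 / (p_exposed - p_unexposed) ** 2

# A 1-in-1,000 background event and a relative risk of 2 already calls for
# roughly 23,500 exposed and 23,500 unexposed subjects; rarer events or
# smaller relative risks push the requirement far higher.
print(round(n_per_group(p_unexposed=0.001, relative_risk=2)))
```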

     So I think that is what I see the task as.  I think in our country, because we don't have a national health care system, we are a little behind the eight ball compared to some other places that are equally -- on an equal economic level to us.  I think we need to play catchup.  When we go to international meetings as people involved in drug safety and epidemiology, we are somewhat awed by some of the systems that have been set up in European countries.

     I think it is incumbent upon us to overcome the systemic obstacles that we have because of our health care system to try to piece together as best we can a system that approaches the level of what other countries have been able to do.

     So that is where I would put my vote, in favor of a large robust system that would be used for hypothesis testing with a massive number of signals that we already have, and that could provide reliable epidemiologic data for safety. 

     MR. RESNIC:  To take this discussion maybe even to a higher level, more abstract level, would it be helpful to prioritize our efforts to describe what the ideal system's major components would be, which I think were articulated well through a patchwork framework in yesterday's discussions and through the public notice.  Then to say what our priorities will be, what the resource constraints are existing in the near term, and what needs to be demonstrated to possibly change priorities of either governmental or industrial partnerships to move things along.

     It seems as though there is consensus that there is a complementary benefit of having the active and passive, regardless of how you classify which one is active and passive, but traditionally the currently passive system and what we are all talking about, the data driven active system, that there is bidirectional communication between the two systems ultimately.  So signals detected in one system perhaps prompt query to the provider and patient community in the other, in the passive system, to see whether there is more there than had been detected.

     Although we have seen repeatedly that the passive current system is subject to reporting behavior anomalies based on notoriety of events in the press.  Likewise, events that are detected, unexpected events in the currently passive system could generate hypotheses to be tested in the active system, either based on diagnoses, true sentinel events, or based on drug product or medical product.

     Within what seemed to be the mandate for discussion in these two days, I think we are talking primarily about the data driven system, and what are the major components that we heard discussed yesterday.  There is the data.  There were some very traditional deep clinical data sources, there are deep administrative data sources, there are novel data sources.  We heard from the pharmacy community, the provider community, from a direct outreach to providers through an Internet source directly to the patient. 

     Perhaps we need to refine what we were talking about.  Are we talking about pilot projects within the existing data sources, the rich deep clinical data sources of the VA, DoD, other large clinical providers?  Where is the data?  What are we talking about?  The next layer of detection.

     Are we going to focus on relatively uniform methods for detection?  Obviously one of the messages yesterday was transparency, consistency, so there is some sustainability and there is not a one-off analysis every time we need to look at something,  there is some method to our madness of approaching these data sources that have yet to be identified.  How would we approach the expectation for detection?

     I think what was mentioned just moments ago is this notion of confirmation of any signals that are detected, the process requiring some human oversight, patient data safety monitoring, root cause analysis, and some prioritization of signals, which we know will include false positives as prevalence goes down; how do we approach that?

     Finally, the last piece is communication, communication within the communities monitoring bodies, so that there is a communication to existing systems, the VA system, DoD, others, international organizations, regulatory organizations, and then to providers, industry and patients.

     I think each one of these very complex pieces, we have to start settling on them.  Across all of them will be the issue of prioritization.  Do we focus in the short term on the high impact, low frequency events that have been the driving force for what Dr. Mandl pointed out yesterday is the crisis-reaction model for how we have responded as a nation to medical product safety issues?

     So I hope this isn't rambling, but I am just trying to focus our attention on the various components of the data driven system and the deep work that is needed to even identify pilot systems through the system.

     MR. CHUTE:  I concur entirely.  Moving the focus to what is the data raises a question that came up yesterday as to what the vocabulary and coding source information should subscribe to.

     Let me give some credentials on this particular issue.  I am chair of the steering committee for the ICD-11 revision at WHO.  I chair a number of terminology and ontology standards groups, both at ISO and HL7 and sundry other standards communities.

     The question is, does MedDRA serve as a pragmatic focus for adverse event ontology representation?  It is a very sophisticated question and a very complex one, and we could spend the rest of the day on it.

     Simplistically, two facts emerge.  One, the development, maintenance and editing of MedDRA is totally disconnected from the clinical community.  Given that virtually all adverse events touch against the clinical community at some point, that is unfortunate on the face of it.  The relative merits of that particular ontology could be examined.  Simplistically it is warmed-over ICD and has a limited level of granularity.

     The second point is the intellectual property issues associated with ontology.  If the requirement for a sentinel surveillance system is to have access to coding systems that can capture and use that information, I would suggest that the right to use that terminology should be in the public's interest for the country.

     I find it bizarre that the adverse event reporting system is a pay per view requirement.  If you look at what are the alternatives in terms of both clinical granularity, in terms of linkage and integration with other clinical systems, and I might add from a federal perspective, what the United States government has invested in, in terms of support, maintenance, infrastructure and the U.S. site licensing, there are alternatives.  SNOMED comes to mind.

     The issue of the data will eventually center around how that data is aggregated and coordinated.  I think that question bears very, very serious examination, because I submit to you that the current FDA sanctioned method -- and I understand this is with international charter through ICH and it is a global question, I know that wearing my ISO hat -- is not necessarily in the public's interest the way it is structured today.

     MR. PLATT:  All well said.  I am thinking of a different dimension on which to add to the discussion.  In the priority setting, to ask where are the opportunities to make big gains quickly, understanding that they may not be the ones we want to stick with for the long term.

     But if we say that one of those opportunities is large data systems that are created in the course of the regular delivery of care, then it seems to me inescapable we are going to be using the coding systems that those systems use.

     So it seems to me we are talking about parallel activities.  There is the long term, how should we think about the world, and what kinds of better systems can we develop. 

     Then there is the short term.  How could we take advantage of resources that currently exist that could be brought much more actively into play to support FDA's and CDC's mandates to address public safety in a more active way?

     MS. RUDOLPH:  I'd like to agree with you on that.  I think there is a way to do it.  The Public Health Data Standards Consortium, which is part of NCHS, has been working for the last four or five years at least on putting together an implementation guide for all of the standard billing transactions and so forth, to allow public health and the terminology of public health and the uses for public health to use that transaction data, and now has a fully approved implementation guide, just as the other payers and purchasers and so forth have.

     So I think there is a way to do that that would create an implementation guide for sentinel events.  It is very doable.  It has been done.  So I think you can take some of the existing data structures and just do some standards work through HL7 and X12 and so forth and get the data elements that you need and the definitions that you need.

     MR. PLATT:  Picking up on Marc Overhage's point that there are certain kinds of safety problems that recur, we have worked with FDA in asking how do you use existing claims data to identify rhabdomyolysis.  It takes work, but it is work that only has to be done once to ask, among the several dozen ICD-9 codes that might be used for rhabdomyolysis, which ones have the greatest predictive value.  It would not be enormous work to go through the most important adverse events that FDA cares about year in and year out, and develop ways to make better use of existing automated data systems.

     It is only a piece of what we need to do, but it is very tractable.  In a period of a few months you would be in a much better place to use very large data resources.
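
A minimal sketch of that one-time code validation work might look like the following, assuming chart review results are available for a sample of claims flagged by each candidate code; the codes, counts, and the predictive value threshold are all illustrative.

```python
# Illustrative code-validation sketch: keep the candidate ICD-9 codes whose
# chart-reviewed positive predictive value clears a threshold.  Codes, counts,
# and the 0.7 threshold are illustrative assumptions.
from typing import Dict, Tuple

def codes_with_adequate_ppv(review_results: Dict[str, Tuple[int, int]],
                            min_ppv: float = 0.7) -> Dict[str, float]:
    """review_results maps ICD-9 code -> (confirmed cases, charts reviewed)."""
    selected = {}
    for code, (confirmed, reviewed) in review_results.items():
        ppv = confirmed / reviewed
        if ppv >= min_ppv:
            selected[code] = round(ppv, 2)
    return selected

# Hypothetical chart-review tallies for candidate rhabdomyolysis codes.
sample = {
    "728.88": (45, 50),   # confirmed in 45 of 50 reviewed charts
    "791.3": (12, 50),
    "729.1": (3, 50),
}
print(codes_with_adequate_ppv(sample))   # {'728.88': 0.9}
```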

     MR. BRAUN:  I was going to ask Dr. Platt about -- I know the HMO Research Network and the Vaccine Safety Datalink.  Yesterday you were talking about 100 million people.  The questions have come up about terminologies and other specifics.  In your view, is the current system that you have scalable to the next level using the same basic agreements and terminologies that you are currently using?

     MR. PLATT:  Well, we are talking about ICD-9 and CPT really as the basis for it, and those are used very, very widely.  So yes, I would say until we have something better -- and I think that is going to depend on much broader penetration of electronic medical records before we are ready to talk about something better, to do very large population work.

     I think ICD-9 and CPT are the coin of the realm.  So we ought to figure out how to get as much juice out of them as we can with a moderate amount of squeezing.

     MS. CUNNINGHAM:  I would have to agree with what Rich said and using the ICD-9 and CPT codes.  That is what we have right now, so that is what we should use.

     I think you also need to have a good handle on how information is coded in your systems.  We have certain codes that should be coded a certain way.  We look at how our practitioners are using another code more commonly.  That needs to be considered as well, so we should begin to do that.

     There are a lot of things that you need to take into account, because you may potentially miss patients.  The only way you can do that is to know your system and know how your practitioners are --

     MR. PLATT:  But no matter how well you understand that, it points to the imperative of being able to do medical record review for those two cases.  To take advantage of these population based systems, you can easily survey the experience of millions and millions of people, but then you have to do the hand work on the hundreds of people who come to your attention that way.

     MS. CUNNINGHAM:  That is absolutely it.  If you don't have a way to go in and validate or verify, then you always run into problems.  What you are seeing is not really what is happening. I think that is the biggest mistake that can be made, is not knowing and validating a lot of this information.  As we begin to do this on a larger scale there is going to have to be some method put into place that allows for validation and verification.

     MR. PLATT:  I'll shut up after this.  It is part of what we see as the elicited surveillance notion; you get real time confirmation from the clinician that that is really the condition, and you can collect additional information that is often missing from the report.

     When you go back to the report, because you are trying to verify that the patient had chicken pox even though the patient had been immunized, and all you get in the medical record is chicken pox, that doesn't help you very much.  So elicited surveillance in my view is a way of ensuring that the information that you want is in the medical record when it would otherwise be unavailable.

     MR. MANDL:  I think this conversation is moving in a good direction.  One thing that would be useful to hear from the FDA now or in the future is what are we trying to detect, where are the thresholds of interest here with a system like this.

I think everyone is in agreement that something like a COX-2 causing a large population burden of myocardial infarction would be one thing; would hip fractures in patients taking H-2 blockers be something that is an FDA interest?  Of course it is an interest, but is that part of the thrust or is that part of an epidemiological association left out of this part of the discussion and handled somewhere else?  In other words, is there a set of priorities that we should be designing towards.

     MR. SHUREN:  Before folks from FDA jump in, let me broaden that too.  Since the effort is for the other folks from the government too, I would be interested to hear from VA and DoD and CDC as to what their interests are as well.

     MR. GROSS:  Again from a device perspective, there are safety issues that are brought to life in our passive system, the MDR system.  But more often than not, I can't turn to a population based database to address those issues.  One of the major reasons is, they don't have a unique device identifier, so that is an issue that is in the works and that is very important.  Obviously if you can't identify the product, you're stuck.

     But we heard yesterday from Kaiser Permanente about their orthopedic registry.  Some of the outcomes of interest with devices are fairly simple.  In other words, how does the product perform in the real world, are there premature failures, are the revision rates early on in the product's performance postmarket what you would expect.

     I would argue that those sort of questions are not signal detection and they are not hypothesis testing.  We have limited data premarket to assess the safety and effectiveness of this product.  It is let out on the market based on limited clinical data, then the real world takes over.

     So it is really a product performance issue.  I would argue that for many devices, the outcomes of interest could be fairly simple.  I think these systems could capture that.  Number one, the large systems that Rich is talking about I think can help map the real world experience early on in a product's postmarket performance.  I think also, when safety issues arise say through our passive system, this would be an ideal place to address those sorts of issues in a hypothesis testing way, as long as we can identify either the device type or perhaps better yet, at a manufacturer's specific level.  We are not there yet, but there are systems that are capable of doing that in certain product areas.

     So it is a complementary mix.  I don't think the MDR system is going away, nor would I advocate that it go away, certainly not in the next few years, because again, for us on the device side it can provide those signals.  But I need to turn elsewhere to get more refined data.

     MS. PAXTON:  I would just like to comment on using administrative data sources for identifying complications following total joint procedures.  We found that although the ICD-9 codes and CPT codes provide an opportunity to identify potential complications, sensitivity rate is very low, 52 percent we have found, in validating complications within our total joint registry.  So that validation piece is critical in moving forward.

     MR. DAL PAN:  From the point of view of drugs, there are a few things we would want to use this type of system for.  One of them would be to augment what our current adverse event reporting system is good for, which is those events that are typically drug related events, the agranulocytosis, the unexplained acute liver failure, things like that.

     Ideally, we could find these events earlier than we currently do, or get a larger case base of them.  But we are also interested in a system that can do signal detection for the kinds of adverse events that aren't typically drug related events; hip fractures, for example, would be something like that.  Then as Dr. Gross said, another part of the system could be used to confirm these signals as well.

     I want to echo the issue of validation as well, in terms of the events that are common in the population.  But we need to have some sort of system.  I'd like to see some of my CDER colleagues jump in on this issue.  David, do you want to say something? 

     MR. GRAHAM:  David Graham, CDER.  It seems to me there is a fundamental question which people talk about.  We call it sentinel surveillance, and CDC people could talk to us about what are sentinel surveillance systems.  What I learned when I was in the school of public health about sentinel surveillance is very different than some of the things we are talking about here.

     So that doesn't mean that maybe what we are talking about here is off target.  Maybe it means that we need to come to a common agreement about what it is we expect out of a sentinel surveillance system.  We can think about looking at things based on drugs.  I can think about looking at what are the common things we are concerned about, but then again, what are things that, if they bite us in the tail, as happened with cardiovascular events and COX-2 inhibitors, where we end up with tens or hundreds of thousands of people who were affected by it?  That is the type of thing we want to be sure to try to capture, because it is happening right before our eyes and we are not even aware of it.  I think some surveillance system should be able to get at that.

     You brought up hip fractures.  We don't traditionally think of hip fractures as being an adverse drug reaction.  It turns out that they very well may be.  If I am taking a proton pump inhibitor and somehow that is inhibiting calcium absorption and ten years on the drug I have hip fractures, that might be a very important thing to know.

     So to quote Rumsfeld, the things we know we know and the things we don't know we don't know.  Surveillance systems in some regard should have the capacity to detect the unanticipated as well.

     There are a host of problems.  Surveillance is one thing. We traditionally think about surveillance as identifying a problem.  Then we want to confirm is that problem real or is it Memorex.  We seem to be talking about confusing those two aspects.  Maybe that is part of the same system. 

     I found very intriguing the idea of having a system that cross communicates with different aspects of the surveillance system that could then elicit an additional way of surveillance that might refine and focus the question and what are our uncertainties about it, and then the confirmation phase.

     So I think those are all aspects of the problem that various members of the panel have touched upon.  But maybe if you are going to be thinking about a cohesive system, I would be very intrigued to know what people think sentinel surveillance means, and then thinking shorter term, and shorter term might be five or ten years, and then longer term which might be longer.

     I think what Rich said before is very true.  No matter what system gets designed or contemplated today, there is a reality that we are facing; what do FDA, what do other public health agencies do tomorrow or next year or the year after that, because whatever we talk about today isn't going to be in place for some period in the future.

     MR. BRAUN:  I would give the biologics viewpoint, but before that to say that I think setting up a large population based system, there are algorithms and statistical approaches that I think are ripe for implementation to screen for adverse event product associations.  So there could be a signal generation, a signal detection module that would be built into a large system that would be ideally suited for hypothesis testing.  But it could be also robust for generating them.

     With respect to biologics, I think we have many and diverse products.  One of the most important is vaccinations.  Vaccinations are given routinely to every age group, but I think a key one is infants, healthy infants.  The reason that we are comfortable injecting vaccines into healthy infants is because they have such a safe risk profile.

     Now, the tolerance for rare adverse events in that setting is very low, and adverse events that occur on the order of one in 10,000, one in 100,000 may tip the balance for at least certain members of our population as to whether they want to immunize their babies.

     So it is incumbent upon us to be able to test the safety of these vaccines that are currently being used at a very precise and also reliable level with good ascertainment.  So for that reason, we are very much in favor of having large robust data systems.

     We heard an example yesterday about a one in 100,000 type of event for an adolescent vaccine, for a disease that is very serious, but is not a common disease.  So that was a good example, and there are others that I won't go into that are current.  So I think it is really a need that we feel acutely.

     Thanks. 

     MR. CALIFF:  At the risk of repeating everything that was said yesterday, I apologize for not being here yesterday, but I was dealing with drug eluting stents all day yesterday with some of your colleagues and wrestling with it.

     Just a couple of points.  One thing that was obvious yesterday is that the FDA needs help to develop an informatics capability that I don't think it currently has to be an integrator of all these different layers that I am hearing described.  I agree, this is going to have to be a layered sort of a thing, but I don't think the FDA right now is in the best shape in terms of its information system and informatics capabilities to take advantage of what is out there.  There needs to be a strategy for that, whether it is total internal capability or some sort of informatics network.

     As I understand it, there is not a full time CIO right now at the FDA.  If I have got that wrong, it can be corrected, but if I have got it right, it probably ought to be --

     PARTICIPANT:  We do.  We do have a fulltime CIO.  He just started Monday.  Thank God we held this public meeting later in the week.

     MR. CALIFF:  That's a good start.  The second point I would make is, like a lot of things, it is hard to describe until you see it work.  I found it frankly pretty embarrassing for the U.S. yesterday that Sweden has a system for coronary stents where every stent that is put in is registered on every patient against a stent followup, and there is no hassling about ICD-9 codes and all that; they knew exactly how many people had heart attacks and strokes and who was admitted to the hospital, and it was a beautiful exposition. 

     These rare weird things, as everybody has said, you need a system to attack those, but the real issue that is killing and disabling our citizens is the complex interaction of drugs, devices and practitioners.

     The stent case is probably a good one, because all three are very involved in the etiology of the problem that we have, and we are in no way prepared to deal with it.

     So I just want to put in a plug if it wasn't done adequately yesterday.  The professional societies and groups have to come to the table.  What happened in Sweden is that the cardiovascular practitioners designed the system to deal with stents, and the government supported it, and they both worked together to make it functional, so they are all participating.

     Everything is not a specialty issue, I know, but many things are, particularly when it comes to devices that are put in by particular types of specialists.  I think the academic medical centers and the NIH have largely been silent in this regard.

     I think there may have been some discussion of this yesterday, but the CTSA effort is putting $500 million a year at its peak into building infrastructure, including bioinformatics, that will be in place in every state, which could also play a major role in terms of layering of the medical knowledge that is needed to bridge this gap.

     So I guess I would summarize my feeling, having heard what I have heard so far, I agree completely with Rich.  Right now we have chaos, and the old saying, we have to make chicken salad out of chicken or whatever we have right now.  But in planning for the future, I am convinced the FDA has to be an informatics integrator, and it has got to be thought of that way and strategized that way, or we will still have completely disassociated people reporting their data.

     There was a feeling I had yesterday with my valiant colleagues at the FDA.  They are sitting there, having to wait on people to bring them information, which I don't think is a good way to do it.  I would like to see it more active.

     MR. OVERHAGE:  Two followups to that.  I think there are some really good observations.

     One is, I think we have to be thoughtful to avoid what I call the 600 gnats problem.  What I mean by that is, when you talk about informatics integrator and so on, I think we have to be thoughtful.  Everybody wants data from every health care provider, for quality improvement, for chronic disease management, for disease surveillance, for drugs, for devices.  There is this never-ending demand in a completely incoherent fashion. 

     So I think we have to be very careful and thoughtful if we are going to be successful about how we think about that data, and the work that Rich has done over a year or so, of building it onto existing flows and infrastructures is really important if we are going to be successful.  So when you think about being an integrator of information, I think we have to be very careful about that.

     The other thing that I think is a really important point, and I'm not sure how we wrestle with it, is this issue about the complex interactions and the subsets.  This is the inverse perspective of personalized medicine.

     We spend a lot of time wrestling with the question of, when we are going to choose a therapy for an individual patient, how do we take into account that individual patient's characteristics.  We are certainly not at the point of figuring out what their genome tells us yet.  Even something as simple as what level of cholesterol I want to drive this patient's treatment to is a pretty complex decision.  Yet we are trying to do surveillance and say, are there more heart attacks across a population of 100 million people.  What about the 10,000 people that are at 100 percent risk of heart attacks that you don't necessarily see because they are submerged in the large population?

     I think that is one of the challenges to claims based data: do you have enough -- I think Fran was alluding to this -- do you have enough comorbidities and those sorts of things to figure out who those 100,000 people who are at markedly increased risk might be, and then avoid giving them the drug or using the devices in that particular subpopulation.

     So I think this complex interaction that you describe between the patient and the drug and the device is a real challenge for the surveillance side of the world.
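
The submerged-subgroup point above can be made concrete with a little arithmetic.  The sketch below is purely illustrative: the population size, subgroup size, background rate, and relative risk are assumptions invented for the example, not figures from the meeting.

# Illustrative only: how a strong effect confined to a small subgroup
# barely moves the population-wide event rate seen by surveillance.
population = 100_000_000        # exposed population (assumed)
subgroup = 10_000               # high-risk subgroup (assumed)
baseline_rate = 0.005           # annual background event rate (assumed)
subgroup_relative_risk = 10     # tenfold risk in the subgroup (assumed)

events_background = (population - subgroup) * baseline_rate
events_subgroup = subgroup * baseline_rate * subgroup_relative_risk
overall_rate = (events_background + events_subgroup) / population

print(f"Overall rate: {overall_rate:.6f}")
print(f"Relative increase over background: {overall_rate / baseline_rate - 1:.2%}")
# Under these assumptions the population-wide rate rises by less than one
# tenth of one percent, which is why comorbidity data are needed to find
# the subgroup at all.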

     MR. SHUREN:  I would just ask a followup question on that.  Would you envision for sentinel network at some point that there is also a loop, that as you identify a new adverse event, let's say, that is associated with a particular medical product, that there may be in some circumstances a case to feed it back into some other arm that then will look at the biological underpinnings or the genomic underpinnings for that adverse event?

     MR. OVERHAGE:  I would hope that happens.  I'm not sure that is the FDA's job or whose job it is, but it would be important to have it.

     MR. CALIFF:  That is what I meant by information integrator.  I agree with you; if the FDA is inventing informatics that is going to be a disaster, because everybody else is doing it at the same time.  But the FDA by law has access to information that other people can't get, which is critical to medical practice, it turns out.  

     I think the stent case is a good example.  In fact, what is happening there is that now that we know that drug eluting stents have a signal, there is no arguing about it.  The only argument is what is causing it.  Is it bad practitioners putting in stents that are not apposed correctly, or is it a fundamental defect in the healing of the endothelium, or is it not giving anti-platelet drugs at the right rate?  So all of those avenues are now very actively at play.

     In a way, it is a system that is working well, but it is working well because all the good information came from outside the U.S.

     MR. CHUTE:  I agree with the notion of leveraging the informatics community and other activities. 

     Let's talk data scale for at least a minute.  Whether we retreat from that in horror is the second question.  The improvements in computing capacity over the past 60 years have been widely cited.  If you add up data storage, dynamic memory, and computing processing, it is on the order of ten to the 50.  That is an astronomical number.  Our ability to manage information in the early 21st century is ten to the 50 fold greater than it was circa World War II.  That is huge.

     If we are going to think then of an information-intense era and of a world class surveillance and sentinel network, what kind of data magnitudes are we talking about?  We run experiments, and I'm sure many of us here do, that generate over a terabyte of data per experiment, huge quantities of data.  We do that routinely.  So to hear that we are dealing with a million instances over a year -- from where many of us sit, that is a trickle, a mere veneer of what is actually going on clinically.

     The question then is, is that FDA's problem?  Is that a national problem?  What role should FDA and the sentinel network play in the context of a scalable national NHIN or related types of activities -- CTSAs, other networks?

     It is abundantly clear FDA cannot and perhaps should not do this entirely by itself.  To set up a freestanding, FDA-managed sentinel network is probably not consistent with 21st century information theory, data capacity and information collection activities.  That begs the question: what components of an emerging national infrastructure should be managed, should be directed, should be overseen by members of the community interested in drug and device safety and patient monitoring?  That turns the question around: not, do you have a sentinel network or not, but how do you tap into the emerging national infrastructure that is being built, at volumes that would dwarf the current level of thinking associated with patient safety.
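
To make the scale comparison above tangible, here is a back-of-the-envelope calculation.  The per-report size is an assumption chosen only for illustration; nothing here reflects actual FDA or laboratory data volumes.

# Back-of-the-envelope comparison: a year of spontaneous reports vs. one
# laboratory experiment. All sizes are assumptions for illustration only.
reports_per_year = 1_000_000       # "a million instances over a year"
avg_report_size_bytes = 4_000      # assumed ~4 KB per structured report

report_volume_tb = reports_per_year * avg_report_size_bytes / 1e12
experiment_volume_tb = 1.0         # "over a terabyte of data per experiment"

print(f"Annual report volume: {report_volume_tb:.3f} TB")
print(f"Single experiment:    {experiment_volume_tb:.1f} TB")
# Under these assumptions, a year of reports is a few thousandths of a
# terabyte -- the "trickle" relative to routine data generation.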

     MR. SHUREN:  This might actually be a good time -- because I agree with everything you said -- to open it up for folks who are in the audience, if you have comments or inputs.  There are microphones on two sides of the room here.  If you want to say something, just step up, introduce yourself, give us your affiliation.

     MR. MORRIS:  I agree with Chris' comment.  One of the things that we are trying to balance here, and Gerald faces this, David faces this: you begin to see things in the error system, you see things that have been coming up in the spontaneous reporting, and the question is, do you have confidence that that is occurring.  So I'm hearing that a part of this is, let's be able to ask questions of larger data sources or discrete data sources, whether it goes to Regenstrief or whether it goes to Brigham and Women's or Kaiser, Mayo, wherever it goes.  You want to know, does it occur in a population, a different kind of population.

     I can tell you right now, if you go into claims data, if you go into SNOMED or CPT based EMR data, if you go into the VA-DoD data, you are going to get different answers.  So now it comes back and says, yes, there is a range of potential risk or a range of associations, and now what do you do?

     The next step I heard is, you want to go back and ask more questions.  You want to have a hypothesis you can go test back in that data.  Now you are in the back and forth of not just merely, does it exist, but let me go ask questions of it.  It puts a layer of complexity in terms of how the network has to operate.

     So I think the first step is, help us, give us more information.  Let me look at multiple different data sources.  But if I don't see it in claims, I do see it at Mayo, I'm not sure if I see it at the VA, it may occur at Kaiser, Regenstrief has got a different population, and Marshfield comes up and gives you a different answer, then you have got to come back and say, what is real, how do I interpret that.  But don't minimize the level of having to go back and ask additional questions, because it is going to come back to confidence: can you stand up and feel confident and say that this is an association or this is real.  There is a level of complexity there, in terms of the informatics, the rules, how the data is structured, that is going to have to be part of the network.

     MS. SACHS:  I am Susan Sachs.  I am at Roche, but I speak for myself, not necessarily for the company.  I don't know how many bites of the apple I am going to get, so I'm going to make my comments fast.

     First of all, there is a list called the designated medical event list.  I think it is the FDA's list.  That is the list with rhabdomyolysis, agranulocytosis, all those things.  I call it the killer list, because they will kill your drug.  They are very, very important to be monitoring all the time.

     Another comment about standards.  We have to deal with MedDRA, because MedDRA is required not just in the U.S.  So it might not be perfect, but we report our side effects in MedDRA.

     We also have to deal with SNOMED, which is part of the electronic medical record, and ICD codes, which are in databases.  When you try to match those three, you run into problems, especially trying to do studies in databases using signals from MedDRA with ICD codes.

     I am going to make a plea that, for whatever network you decide on, please consider the issue of safety reporting and what the requirements will be.  Even looking at the CIOMS report, it is not clear what safety reporting is required out of observational studies, whether if you see something it gets reported in a PSUR at the end or it should be an expedited report.  Is it duplicate reporting because the physician has already reported it, and now you are talking about networks with lots of people looking at the data?  Who has to report the event?

     Finally, I agree with Miles.  These databases are incredibly important for risk assessment.  If you use them just for signal detection -- and I guarantee you, lots of people in this room have different definitions of what a signal is -- but if you use them just for signal detection, where do we go to assess those risks?
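
The terminology mismatch described here (MedDRA for regulatory reporting, SNOMED CT in EHRs, ICD codes in claims) is, operationally, a mapping problem.  The following is a minimal sketch of the kind of cross-terminology lookup a network might maintain; the code values are placeholders rather than verified codes from the real terminologies, and a production system would rely on curated mapping resources rather than a hand-built table.

# Minimal sketch of a cross-terminology lookup for one adverse event
# concept. Code values are PLACEHOLDERS, not verified MedDRA/SNOMED/ICD
# codes; real mappings come from curated sources and are often
# many-to-many rather than one-to-one.
EVENT_MAP = {
    "rhabdomyolysis": {
        "meddra_pt": "MEDDRA_PT_PLACEHOLDER",
        "snomed_ct": "SNOMED_CT_PLACEHOLDER",
        "icd9_cm": ["ICD9_PLACEHOLDER_A", "ICD9_PLACEHOLDER_B"],
    },
}

def icd_codes_for_signal(meddra_pt: str) -> list[str]:
    """Translate a MedDRA-based signal into the claims codes to query."""
    for concept, codes in EVENT_MAP.items():
        if codes["meddra_pt"] == meddra_pt:
            return codes["icd9_cm"]
    return []  # the hard cases are exactly those with no clean mapping

print(icd_codes_for_signal("MEDDRA_PT_PLACEHOLDER"))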

     MR. CECERE:  Fred Cecere, Chief Medical Officer of Noblis, formerly Mitretek.  This is a great discussion.  You all are going at the problem systematically.  But I would let you know that we have been looking at some problems with mining both structured and unstructured data in the medical record, and I think we need a lot more progress in mining doctors' notes and nurses' notes and other unstructured elements within the data fields, which contain a lot of information that is critical when you are doing any sort of surveillance work and that is not in the diagnosis.  People don't always put in the diagnosis those things which are clearly important when one is looking for a small or significant adverse event that they may not be very happy about.  So I think we do have to get much better at mining unstructured data.

     I also think there is a place, although I don't know where it is, in this surveillance world for patient diaries.  Applying some Bayes theory, I think if we know a lot about a few and a little about a lot of people, you can start making some inferences you can't make just by knowing a little bit about a whole lot of folks.

     I think it is important that someone study this carefully and find out if we could create patient diaries around those critical drugs and devices which are most likely to be problematic, where someone is recording everything about their life, the way the nurses' study was done in other areas.  You might be able to apply some analyses that you can't do if you are just randomly grabbing pieces of information for a large number of people.

     So if that is of any help in moving the dialogue, thank you.

     MR. SHUREN:  If you are going to respond, --

     MR. MC DONALD:  I wasn't going to respond specifically, but I have accumulated some things that I want to say before I burst.

     There are two or three things that I think we have to be careful of.  The first is that when one takes a whole bunch of databases, you are clearly going to get different answers, because they are not population based.  So I think we really have to be conscious of population-based sources for any long term outcomes.  Some of the HMOs have 20 percent turnover.  Medicaid comes in and out.  Medicare is wonderful, because once you are over 65, you never get younger, or something like that.

     So that is number one: just be conscious that if it is a long term outcome, like Vioxx, you need to have some way to get a population base to get better answers.

The second thing is, how much of this is really doable in the end?  We have built many systems in the U.S., big expensive systems.  The FBI had one, the VA in Florida had one -- $250 million, $750 million -- air traffic control, and they never worked.  So just some caution about reaching too far.  We ought to have a starting point we are sure we can do: we can find the next Vioxx earlier, maybe something even more tricky than that.

     We have got a lot of computing power, and it is very seductive, this computing power, but it doesn't mean we can do everything.  There is this chaos theory that says even deterministic equations can't be solved, because you would need infinite precision in the starting points.  Weather forecasting hasn't gotten a whole lot better; maybe we have gone from three days to five days in the same 50 years.

     So there is a logarithmic challenge here.  We have to be careful: when you start to get to the dimensionality that we are going to have, how the hell do you analyze it and get real answers?
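
One concrete face of this dimensionality problem is the multiple-comparisons burden: screen enough drug-event pairs and chance alone produces apparent signals.  A minimal sketch with invented counts (none of these numbers come from the meeting):

# Illustration of the multiple-comparisons burden in broad screening.
# The counts and threshold are assumptions for illustration only.
drugs = 2_000      # hypothetical number of products screened
events = 500       # hypothetical number of adverse event categories
alpha = 0.05       # naive per-test false-positive rate

pairs = drugs * events
expected_false_signals = pairs * alpha

print(f"Drug-event pairs screened: {pairs:,}")
print(f"Expected chance 'signals' at alpha={alpha}: {expected_false_signals:,.0f}")
# Tens of thousands of spurious hits is why screening methods shrink or
# adjust estimates instead of relying on raw per-pair tests.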

     Then the final thing is, let me emphasize, don't go it alone, because that is going to be divisive, and work, and cost.  Medicare, and whatever it evolves into -- which I would expect in the next ten years is going to be more -- has certainly got to be a good ally if you can get them.  I'm not speaking for NIH, by the way, I'm speaking for myself, I should be careful.  But they have a problem of cost.  It is the inverse of looking at this data: what good is this, how much good is this thing.

     So they are duals of each other, and we oughtn't forget them, because cost is what is going to kill us all.  We are up to 20 percent of the Gross National Product and major companies look like they are going to be out of business in three years.  So that may dominate everything.

     So we may save the one-in-one-million bad event and let 10,000 babies die because they are not getting well-baby care.  So be conscious of the extent.  Being absolutely perfect in this area may actually not be the final best answer for all of health.

     MR. MATTES:  I must say, I was really struck by the discussion that Chris Chute started, bifurcating the two approaches between passive and active.  I think that kind of analysis of what we have out there right now is useful.

     I would suggest it be extended to the point of examining the different systems and collection methods.  Collection points, I would think, should be considered in terms of: do you collect information at the level of the patient, as we saw with the I-Guard system?  Do you collect information at the level of the practitioner?  Do you collect information at the level of the pharmacist?  These, I think, will give you different types of information.

     The second point I would like to make is that I am also hearing different kinds of discussions about what we want to get out of this.  Much as Stephen Covey is sometimes rightly maligned, I would think you would want to lay out what you want to end up with.  Begin with the end in mind, is the line.  What are some of the prime goals of this system?  Is it acute liver failure, is it Vioxx?  And ask, can you model detection of that with the systems that are out there as potentials.

     MR. KRALL:  Ron Krall, Chief Medical Officer at GlaxoSmithKline.  There are a number of things I want to comment on.

     The first one goes right to the question you just ended with, which is what do we want to know from such a system, what do we want to learn.  From my perspective as a chief medical officer, follow Sutton's law: go to the period of vulnerability, from the time the drug has been studied in clinical trials and gets approved until the time we know it very well in practice.

     That period of vulnerability leads you to look for certain specific things -- for example, classic drug related events, agranulocytosis, hepatic failure, those kinds of events that we know occur rarely but have been the reason to take drugs off the marketplace in the past.

     The second kind of events are those that we suspect from this drug because of what we know about it or members of its class.  Rhabdomyolysis might be a good example of that.  Third is -- and this is reaching a little further -- events of the kind that we think would have a big public health impact, so myocardial events and bone fractures would be examples of those kinds of events.

     We also want to be able to find things that we don't suspect at all.  So we do want to be able to do signal detection.  That is what I want to be able to do for the drugs at GSK, to be able to look for those kinds of events in the marketplace at the time when we are making the drug available to patients.

     We would also like to be able to use this kind of a system for hypothesis driven studies.  We would like to be able to confirm signals that we see, whether they are from other clinical trials or from the spontaneous adverse event reporting system.  We would like to use this as a hypothesis driven method or tool to confirm signals that we see elsewhere.

     We would also like to be able to study what someone in the room called product performance; do we get the expected benefit of this medicine, the benefit that we expected and projected we might get from the clinical research experience that led to its approval.

     Al Amenius made a presentation yesterday showing you a little bit of the work that we have been doing at GSK.  We think it is possible to create the kind of large database system that would allow us to answer all of these questions.  We are traveling down a road within a couple of years of being able to do this for all of the medicines we launch at GSK. 

     But honestly, we don't believe that that is the right way to do this.  We believe that it should be done in partnership, as a public-private partnership, that we ought to be pooling resources.  We ought to be developing best practice methodology for the detection of these kinds of signals, for the validation of these kinds of signals, and doing it in a partnership that has lots of transparency, so that we can develop trust of the public in the kind of surveillance that we are carrying out for our medicines.

     MS. WEST:  Sue West, University of North Carolina-Chapel Hill.  I wanted to segue at this point to talk a little bit about what Hugh Tilson mentioned yesterday, which is training in pharmacoepidemiology and pharmacovigilance.

     I think one of the major issues that we are going to face, even if we do go forward with some sentinel network, is whether we have the hands and the minds of the people to do this in this country.  So I think that we need to consider the academic sector when we are putting forward some new ideas.

     I would like to give an example.  UNC has been very fortunate, and has worked hard, to obtain training funds for developing the field of pharmacoepidemiology.  We have been very fortunate to get unrestricted educational funds from GSK.  We have also obtained funds from Amgen and Merck, and we are always continuing to put the hat out for additional training funds.

     At the current time, we have about 12 to 15 Ph.D. pharmacoepidemiology students training at UNC.  We train them not only in methods, where our students are very strong methodologically, but also in working with these large claims databases and the electronic health record.  So we are training them in these methods so that they will be a sufficient work force for the future.  But we only have 15 of them at the current time.

     The other thing that our students are learning is genetic epidemiology.  That is going to be very important for the pharmacogenetics of the future.

     So I would like to put in a plug for additional training in this area, whether it comes from the private sector, whether it comes from FDA.  But we have to make sure that we have the work force that can move forward with these great ideas that we are proposing today. 

     MR. JAIN:  Good morning.  This is Shell Jain with ACS.  That is Affiliated Computer Services, although for today's meeting the American Cancer Society would probably be more appropriate.

     I wanted the sentinel network to contemplate another network, a network that exists today and is a benefit of the United States health care system.  We are transaction oriented.  By transaction oriented, I mean there are interfaces that exist today, live with patients and clinicians, at the point of care -- everything from claims being filed electronically at the point of service, at a pharmacy where the patient is in front of a pharmacist; e-prescribing, where when the physician types in a scrip, software could prompt that physician to query that patient while the patient is in front of them; e-mail exchange with patients, where insurers are paying physicians to conduct e-mail exchanges with patients, live.  Or even, as DoD talked about, a telepharmacy model across its full TRICARE population.

     So there is an infrastructure, technology, and movement and growth in that area that it would seem to me we could take advantage of, in terms of wide query between a patient and a trained clinician.  To take up the issue of training, those trained people already exist.  So it layers onto an existing platform what the FDA is already hoping to do, which is collecting live patient-clinician information at the point of care.

     That network is parallel to the main focus of this conversation, which is a data mining exercise over large population based, claims based, EHR based systems.  I think that network needs to be tapped into, and it is perhaps even a starting point to jumpstart what the FDA's mission is today, and perhaps those of some of the other departments and agencies.

     MS. LORRAINE:  Thank you very much.  I think at this point it might be a good idea for us to take a short break.  So let's take a 15-minute break and come back at 10:05.

     (Brief recess.)

 Agenda Item:  Moderated Discussion on Opportunities for Collaboration

     MS. LORRAINE:  Thank you all.  I think we had a very good discussion this morning, and I hope everyone is warmed up for the next part of our discussion together.  Now is the time when we need to become a lot more specific about how we are going to put this whole thing together.  We have been talking at a fairly general level, and I think it is critical for us to get much more specific and much more concrete about how we are going to assemble something that can serve all the disparate needs that have been identified this morning.

     So as everyone contributes, I would like you all to be as specific as you possibly can, so that we can have at the end of the day a real sense of what our next steps are and where we are headed together with this effort.

     Fred this morning began talking about the data.  I am going to turn to you a little bit, because I know you and a number of others are going to have to leave us before the end of the day.  So we want your contribution before you have to depart. 

     We have had a discussion that ranged very widely around data, is it an active system, is it a passive system, will we leave that terminology behind.  We have identified that there are a number of different purposes that we need the data for.  I would like to have everyone's sense of whether we are going to be able to use the same infrastructure, whatever it is that we have that exists now, can we use it to answer all of these different questions.

     Fred, I'd like to start with you.

     MR. RESNIC:  To try to stay focused just on data, my first comment is, I am a little bit concerned that we don't lose sight of the goal of ultimately using point-of-care collected, somehow reasonably validated clinical data, though I know that that is not immediately available in the broadest sense right now.  I think we have been talking about the availability of very, very large claims based data sets, but I am concerned that investigations based solely on claims based data sets lead to the need to drill down, build up a clinical report for all the cases that you are investigating, and then do post hoc risk adjustment.

I think ultimately your target ought to be a sentinel network based on data gathered from the operation of clinical systems that are somehow integrated with one another.

     In terms of what data Massachusetts could provide, we mentioned yesterday that Massachusetts has a clinical outcomes registry that is currently restricted to cardiac procedures.  I have spoken to Dr. Sharon-Lise Normand, who is the operational manager for that data set, and she would be very eager to contribute what could be contributed from that data set, recognizing its limited scope but relatively high quality.

     I think we have to think creatively though about perhaps a road map for the goals for the data sets and the integration of the data sets. I think the first milestone perhaps is the use of large claims based data sets.  Maybe that is a two-year goal, but at five years the sentinel network ought to have a benchmark for success of having X number of lives in certain populations, from live prospective clinical data repositories. 

     I think from my perspective, if you don't have that as your goal, we will never get there.  That is ultimately the way for us to partially approach the success of other large health care systems such as Sweden with the integrated health care system regarding cardiovascular devices, which was a crisis reaction for medical devices.

     The other point, being an interventional cardiologist focusing primarily on medical devices: I think that we do have a huge gap in the claims based data systems, in that I don't believe we can track the precise device.  There is no good device classification system.  So absent better classification of the devices, we are not addressing the entirety of the medical product safety surveillance goal.

     So those are my initial thoughts. 

     MR. PLATT:  I think you have no choice but to think of separate systems that will have connections between them.  But to try to conceive of a single melded system I think is too big a leap, and it will force us to forego some very straightforward, achievable goals.

     I think in these bins.  One is claims.  I think of ways to upgrade claims, so come the day that there is a unique identifier, it would make enormous sense to start negotiating with payors about requiring that that unique identifier be included as part of a claim.  That will take a long time to happen, so once you are confident that there is going to be a unique identifier, I would start that now.

     I think about electronic medical records as a second.  I would think about personal health records that Ken Mandl works on as a third area of resource.  Then I would think about registries and clinical data repositories as a separate area.  They often contribute in different ways, but they ought to be taken advantage of separately.

     It will be important to build a connection between them.  I think the best registries are likely in this day and age to be built on top of claims data systems.  So it seems to me that it would make sense to have the data that is collected as part of a cardiac catheterization be clearly tied to claims data because it will allow a lot of the longitudinal followup that would otherwise be very hard to accomplish.

     One of the real value-adds that I think the sentinel network could provide is starting to articulate the ways in which these initially separate systems could begin to talk to each other.

     I'll put in one more plug for saying it is critical for the federal agencies to start the conversation about making clear that health data ought to be part of evidence generation.  When each of us gets medical care, we should understand that that information should be available for understanding how that care works.

     That means, for instance, that when CMS data become available for doing postmarketing safety studies, that hospitals and clinicians would make medical records available for appropriate followup.

     MR. GROSS:  Just following up on what Fred and Rich had to say: for the interim, there are models where device specific information is captured, typically in a registry environment.  We heard about Mass-DAC yesterday.  The American College of Cardiology is capturing similar sorts of information.  Currently there is an effort with CMS to look at ICDs.

     Then you link that registry data via patient identifiers to claims data, so that is the link.  Yes, the goal in the long run -- hopefully three years rather than ten years -- is to get a standard for unique device identification that will be incorporated into health records, generally speaking.

     So again, I agree that steps can be taken now and should be.  We have heard about the orthopedic registry.  There are other device specific attempts like that where you collect device specific information up front that can be quote-unquote easily linked via patient identifiers to these other claims databases.

     MR. CALIFF:  Rich and I usually agree, but I never understand that we agree until we talk things out a little bit. 

     Rich, you are not implying that we should continue this sort of fiefdom approach to the problem, where we have little tribal warlords that own their little repositories and have to be somewhat tapped through a contract to offer data.  It seems like the concept of meta data here at least ought to have some credibility. 

     I agree, a monolithic system would be disastrous.  I think everybody has said that.  But the question is whether you can develop more of a patchwork that reaches across things like the ACC-NCDR registry, which has 80 percent of the coronary procedures done in the U.S., or the STS registry in cardiac surgery -- whether you can begin to put that together with other sources in a way that doesn't make them one entity, but something more than separate, walled-off data sources.

     MR. PLATT:  So tribal warlords, bad.

     MR. CALIFF:  That is what we have now, right?

     MR. PLATT:  Yes, you're right, we usually agree when we talk about stuff enough.  I was trying to make sure we don't try to reach for something that may be beyond our grasp.  We have a lot of important components of a much more effective safety system than we are taking advantage of now.

     I am mostly asking that we be sure we understand what the basic tools are that already exist.  Part of what I was trying to say is that I want to make sure that when we are going to purpose build something, a registry, that we do it in a way that takes advantage of data we are already collecting.

     I was a little nervous, as I understood the way the implantable cardiac defibrillator registry was being created, that it was being created in a way that wasn't taking advantage of all the other health information that CMS was already collecting about people who were getting those implantable defibrillators.  It started with a blank slate.  Maybe it was just my ignorance, but it seemed to me that it was missing a big opportunity to add very important information to what had to be collected as a registry.

     MR. CALIFF:  Can I say something just for a second?  There is an important issue here that we are working hard on, and people at FDA are, too.  I'll just call it empirical ethics.  Who can argue that we shouldn't be able to link up registries that are done in practice, like the defibrillator registry with the NCDR, which has all the cardiac cath data?

     Both are in the American College of Cardiology repository, which we and other academic centers house, but we are not allowed to put them together now because there is not a consent process to do that.  So working out how to make that happen in the current rules and regulations turns out to be a logistical problem that has an ethical context that we have to handle.

     MR. PLATT:  Whether it is ethics or a public education piece is not so clear to me.  But it is very clear to me that the job will be a couple of orders of magnitude harder than it needs to be, unless there is a real concerted effort to make clear to every clinician and every consumer of health care that evidence development is part of the social contract, and that we understand that there are appropriate uses of data that go beyond the individual patient-clinician interaction.

     I am pretty sure we are not there now.  Quite frankly, I'm not sure that government is the right part of our society to lead that conversation.  But I think government has to put real resources into facilitating a conversation across our society, so that we can get beyond having to spend time thinking about how you link two databases that happen to be in your possession.

     MS. LORRAINE:  Before I go to you, Barb, Kelly, I'm wondering from where you sit in the department and your activities, if you would like to comment on what we heard about evidence development being part of the social contract.

     MS. CRONIN:  Good question.  I think we do have a lot of public conversations going on right now around privacy, not specific to research, but in relationship to the Nationwide Health Information network.  I think secondary uses is something that we need to continue the conversation around, and we likely will over the next year.

     So I think that clarity, not only as it relates to HIPAA but as it relates to the general public's concerns, would be quite helpful.  Whether or not the government should be convening those meetings is a good question, but certainly it does relate to our overall agenda that needs to happen.

     MR. PLATT:  Just to give you an example that is going on right now, colleagues and I are working with CMS on ways in which CMS might make a big difference in health care associated infections, so a different domain from today's topic.

     It is a very robust collaboration, but we are having lots and lots of discussion about under what circumstances medical records can be made available to confirm or dismiss the existence of an infection in individuals who look, from CMS claims data, as though they might have one.  A very, very difficult conversation.

     MS. CRONIN:  I think one point of clarification that is important from the FDA perspective is that the work that you would be interested in is public health surveillance, which is clearly a public health activity.  I think that AHRQ and other academic researchers more globally interested in evidence development falls more into the category of research.  That has a whole other set of legal requirements associated with it.

     So I think when we do have this conversation more broadly, we need to think about not only educating the public and making sure they have an informed opinion that is well expressed, but also make sure that when we are talking about public health functions that we talk about not only accessing data and using data for the good of public health, but that when a consumer decides to consent to participate in a network that is going to exchange their clinical data, they understand how it is being used and what is clearly a public health function that will protect them and their community.

     MR. CALIFF:  Kelly, are you sure that we are all clear on public health, quality and research as a spectrum, and where the dividing lines are between one and the other?  I'm not.  The Swedish database I referred to was not a database put together for epidemiology of devices and drugs; it was put together to provide a quality system for the professional society and the hospitals in the country.

     MS. CRONIN:  Right.  I think there is a lot of utility for a lot of the data sources that are available for many different purposes.  I think FDA's hat will always be a public health hat, and they have that jurisdiction, and there isn't any confusion across jurisdictions on that.

     But I think when it comes to trying to figure out a conceptual framework for secondary uses of clinical data from registries, from various types of repositories, or from local health information instead of just regional and national, we do need to think about individual scenarios of how that data is being used for what purposes, and relate that back to our current legal framework, so that we are all very clear not only about what is compliant with current law, but also about how we make sure that clinicians, consumers, public health partners and the research community are acting appropriately and that people are well informed about exactly how their data is being used.

     MS. RUDOLPH:  To add on to the conversation, it would seem to me that if we really articulate the purpose of the network, it is clearly public health: is there morbidity or mortality associated with the use of these drugs or devices or biologics?

     At that point in any state that I know of in this country, public health law has the right to request information from health care providers.  In most states there are mandates to do so for specific types of registries. 

     Right now, Congress has mandated that states, all states, develop an adverse events registry, and 28 states have already done so.  I think there are plenty of ways to get at the data, whether it is the claims data or the clinical data, under the public health flag.  Obviously you want to bring the provider community along with you; you don't want to hit them over the head with it.  But to not use that, and to not use the available data in fully population-based state data systems, seems to me a really big waste of the resources that we have.

     Many of those administrative data systems are being augmented with clinical data -- not just Massachusetts, but Pennsylvania has significant clinical data, and California has clinical data available within their administrative data that can be used for public health purposes.  Many other states are going in that direction.

     So I think it certainly is an important component of this.  It may not be the whole picture, but it is an important component. 

     DR. MC DONALD:  I just want to return us to a couple of really good points.  The first is this tension between privacy and utilization.  I think there is this tendency -- not so much toward a tribal warlord contingent; it is me, each individual it serves, and that's it, no one else gets to look at it.

     Among the subtle things that are going on: there is not going to be a patient identifier in my lifetime.  We started and tried to do it in 1994.  Maybe you are younger and you will get one.  But there are ways to link.

     What is happening behind the scenes is more and more inability to include those key variables that make it possible to link.  So I think we have to be very cautious about forbidding the use of the social security number in some records, because it is very, very hard to link accurately in many contexts without that.  You can do it for short periods of time with addresses and some other things.

     For the FDA's interest, the linking is part of the problem.  There are two parts to it.  One is, you are not allowed to look at my stuff, and you have got to go find me, when I move five times and don't give permission and I don't answer the doorbell.  The other problem is, there may be a gradual eating away at our ability to do linking, through increasing laws and regulations about personal identifying information.

     So that is something you have to do something about, or you are not going to be able to link all this stuff.  The Medicare number is a good linker, but that is just the over-65 population, and it is based on social security.
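
The linking concern can be illustrated with a toy deterministic match on partial identifiers.  The records below are fabricated; real linkage uses probabilistic methods, careful standardization of names, dates, and addresses, and must operate within privacy rules.

# Toy deterministic record linkage without a universal identifier.
# Records are fabricated for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    last_name: str
    birth_date: str   # YYYY-MM-DD
    zip_code: str

def link_key(r: Record) -> tuple:
    """Match key available when no SSN or national identifier exists."""
    return (r.last_name.strip().lower(), r.birth_date, r.zip_code[:5])

registry = [Record("Smith", "1950-03-08", "20850")]
claims = [Record("SMITH", "1950-03-08", "20850-1234"),
          Record("Smith", "1950-03-08", "02115")]   # same person after a move

registry_keys = {link_key(r) for r in registry}
matches = [c for c in claims if link_key(c) in registry_keys]

print(f"Matched {len(matches)} of {len(claims)} claims records")
# The post-move record is missed: without a stable identifier, the
# deterministic key breaks, which is the erosion of linking ability
# being described.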

     MR. HILL:  We started off this section this afternoon with the general question, how do we go about assembling the sentinel network.  We have gone through a similar exercise with the AMBA members this last year, where they wanted to contribute and aggregate and share their data appropriately, all under their own control.  We went through the same kind of process.

     I don't think we have quite adequately answered the other end of the question: what is it that we actually want to assemble?  That has already been mentioned a time or two.  What we did yesterday was spend a lot of time giving you ideas of what is out there and available to you -- here is what you can pick and choose from.  Now we are at a point where we need to decide what it is we want to accomplish and what it is we want to assemble, and then how we put a framework over that.

     So I think we could talk all day about how we are going to assemble, but assemble what?  I don't think I have quite heard an answer to that yet.

     MR. CHUTE:  As people in the room may recall, I have been on the far end of the visionary wonderland, but I agree with Rich.  We need to think of an incremental strategy. 

     There are two traps we can fall into.  One is to define an opportunistic sentinel network that does take advantage of the available information, and think we are done.  That is the problem.  Clearly we can learn a great deal more from available information, using either public health law or emerging NHIN environments or emerging claims data.  Of course, I am the biggest critic of using claims data for this kind of purpose, but it does have value, even though it doesn't get us all the way to where we want to go.  So the trap of thinking that if we take advantage of this information we don't need to move beyond it is going to be a serious one.

     The other is, as we evolve into a more cogent and comprehensive surveillance of potential adverse events in the context of devices or drugs, failing to leverage partnerships with other organizations.  It is an interesting question whether to work through the public health mechanism, for which, as you know, there is no federal mandate; it is a state mandate.  The poor CDC struggles with that every day.  Whereas FDA does have a mandate that is not similarly restricted.  I'm not a lawyer, but I understand you are not required to operate through state agencies.

     So you are dealing with a discordance of federal legislation and mandates in a common cause, so beginning to work through from a societal benefits perspective, what is the optimal way to achieve this, and with whom should you partner and how should that arrangement be managed.

     Clearly ONC can be and should be a major coordinator -- that is what the C stands for -- of these kinds of activities.  But I do see it as evolutionary and incremental.  The sentinel system that you might devise two years from now should not be your final product.  It must continue to evolve and mature.

     MS. TRONTELL:  I have heard a number of suggestions.  I hear a theme arising that we may have some framework or backbone or core -- maybe the seed that makes the crystal start to come together -- in the administrative data systems.  They admittedly have their limitations, but they may be the mechanism to which, through linkages, registries and other richer data sources like electronic medical records might be attached.

     Your own comments would suggest that the computing ability for linkages exists.  Clearly there are other important issues for standardization and identifiers.

     I think the question is how you get that seed into your crystal to have everything start to precipitate out.  Between CMS and the Department of Defense and the Veterans Affairs Administration, you have a pretty potent number of individuals that you might be able to start to assemble.  Even those data systems themselves vary in terms of the richness of the data attached to them.  I would like to invite some of those members of the panel to speak about whether I am naive and you can't readily assemble them, or whether there are ways we could start to think about that as we get going.

     MS. CUNNINGHAM:  I can answer that.  I would have thought before a couple of weeks ago that there was a way to assemble it.  I think now I don't know.  I can't answer that.  There are certain data security issues that are going on.

     I think what happens to us inside of our system is, a line is often drawn between what Kelly stated quite eloquently: what is public health and what is research.  When we are doing certain things from a medication safety standpoint, we do it all under public health, because we are trying to evaluate the safety of medications and evaluate usage of certain things in our system, and events in our system, so that we can respond to things relatively quickly as needed inside of a system.

     I think often the answers that can be gotten quickly aren't always there.  One needs to take it a step further.  One needs to do more aggressive analyses.  One needs help from outside systems to do that.  When that occurs, you need to go through a research process, unfortunately, inside of our system.  It would be great if we could find a way to merge that so that it is not a barrier, so to speak.

     Now, with that in mind, I think we do still take a lot of things into consideration.  I think we adopt certain things relatively quickly as a health care system.  We end up sharing quite a bit of our information with the Department of Defense.  There are different levels or different people that we interact with.      

     At my level I was trying to develop a program with Dr. Trenka Coster.  We were introduced to each other by Paul Seligman, who said, you're doing the same thing, can you guys get together.  It was a nice relationship that we are still trying to develop.  She of course is not here, so it slowed down.  But there are things that occur at different levels that we don't always have control over.

     I don't know if I gave you a very long answer or a short question, but certain things we can do and certain things we can't.

     MR. VALENTINO:  If I could just add a little bit.  Someone, I forget who it was, mentioned yesterday that the larger issue is the people issue, and getting people to want to collaborate together.  I think that our successes with collaboration have been because we found people that have wanted to work with us.

     We have run into some barriers, quite frankly, with CMS in terms of Part D data.  Congress directed us to do a person-level match to see reliance on VA pharmacy versus Medicare Part D.  We are still waiting for a response to the letter, let alone trying to actually get this done.  So I think there may be some legislative changes that need to be made to facilitate these kinds of things.  Perhaps some legislative mandates need to be pursued.

     MR. DATENA:  From the perspective of the Department of Defense electronic medical record, I think on a daily basis we share more and more with the VA.  That is not to say that we haven't had our own issues.  We have some of the issues that you all are talking about, with what legally can be shared.

     I think that some of those issues really need to be tackled so that we can share the data that we all know we need to share to come up with the answers that we need.  But there are still a lot of roadblocks even within the federal government to sharing this data.  I think we are getting better, but as Mike mentioned, there do need to be some legislative changes so that we can freely share that data. 

     MR. MC GINNIS:  On the pharmacy side, we are mostly claims data.  I am very interested in hearing how we can capture that data in the future so it might be more useful to something like this.  That would increase the quality of the pharmacy benefit to our beneficiaries and give us information back quicker, so we can get it out to our providers to try to prevent some of these preventable adverse events that occur because we are not getting the information out to those providers in a timely fashion right now.

     MR. PLATT:  We haven't yet said today that effectiveness is inextricably linked to safety, so I am putting it on the record.  A system that is going to do the most to ensure the safety of the public with regard to therapeutics has to also engage in understanding effectiveness.

     So I am coming back to what is the public health mission.  It includes more than identifying adverse events and quantitating them.

     MR. BUDNITZ:  Since you put on the table what the public health mission is: from the CDC's perspective, the public health mission is quite broad.  While we certainly are interested in the, say, unknown unknowns of signal detection, or finding new adverse effects of things that CDC has a traditional interest in, like vaccines, we are also interested in known unknowns, like infectious diseases related to devices.

     Added to this group of events are what we think of as the known knowns, the known adverse effects.  But we really do have very limited national data on what those are.

     So as we look at effectiveness, we also need to look at what the burden of known adverse effects is as well.  This could be something this new system could do as well. 

     MR. MANDL:  I will just reiterate a few caveats that have come up throughout the last day and a half that I think we should continue to take into account.

     As we saw yesterday, there are going to be fiefdoms, and there are going to be ownership issues over data.  I think those are real and those will persist.  Data are perceived to have monetary value, value around competitive edge with health care systems, and we have to think about sharing data under circumstances where the realities are on the ground and people hold data closely.

     The privacy issues will arise.  As this comes onto the public radar, the privacy issues will expand and become more of an issue that needs to be confronted directly.  We are seeing that go on very actively right now at the HHS level.

     Another is that as Dr. McDonald pointed out, efforts that result in simply funding a large information system to be functionally spec'd out and produced have often failed.  He listed a number of governmental examples of systems where we go out and try to build a specific information system for a specific purpose, and end up with not much to show in the end.

     I think this issue of linking data across sites of care at the individual patient level is going to continue to be a persistent problem.  I don't see a rapid solution to that.  So whatever system we build has to take into account that that is going to happen.

     I will put out a question.  It seems we are really talking about the space for postmarketing.  I guess the question is how seriously do we take phase four.  We want early approval.  We want to get drugs to market, and then we want to follow what is going on. 

     The work at Kaiser that you presented yesterday was very elegant, and was what the public thinks is already going on for everything we do.  That is what we would like to see happen; when we put a new device on the market we would like to be watching it very closely.

     So I think the question is, and this is a much bigger question to which I don't have any practical solution, what is the obligation of the health care system as they begin to use devices, what are the obligations of the manufacturers and pharma as they put new devices and drugs on the market, to follow these things very closely.

     The health care system to date -- the medication list for an individual patient, despite what we bill for the care we provide, has been elusive.  We provide a lot of care and a lot of procedures, but we don't provide an up-to-date medication list for our patients, despite the fact that it is the practice of medicine.

     So I think while I don't support a system that involves asking clinicians to enter more data, I do support a system that puts an imperative on health care institutions to provide the proper kind of data to feed into systems for quality, safety and for patient care.  I think we should think about how to express those obligations across the NHIN efforts.

     Lastly, I want to reiterate that I think that this is a use case of the NHIN.  I think we really should think about this in the context of leveraging the broad cross-HHS efforts that are going on in the NHIN, and think about this as part of that patchwork that we are trying to produce there.

     MS. SLUTSKY:  I just want to go back to something Rich said about effectiveness.  I think we need to be very careful not to drive a firewall between safety and effectiveness.  You can oftentimes define effectiveness as a balance of benefits and harms.  By not keeping that connection close, I think we are confusing ourselves, or hiding our heads in the sand.

     One of the questions I wanted to put to the panel is the idea of roles and responsibilities: what is inherently governmental, how can government help, in what ways can government most effectively help set up these types of systems.  I don't know if we are talking about a distributed or federated model versus a large data aggregation effort; I have heard both things over the past two days.  And what is the role of the private and semi-private sectors?

     MR. GROSS:  I'd like to follow up on what Ken was saying about phase four studies, we refer to them as post approval studies, in the device area.

     Yes, we do take them seriously.  As you all know, the responsibility for doing those studies resides with the manufacturer.  As you also realize, they can be very costly, so we try to balance the scientific questions with the feasibility of doing these studies and the cost of doing these sorts of studies.  When you put that all in the mix, these studies tend to be small scale, because of everything I have said.

     Having said that, if this sentinel system was more mature in the way we have been talking about, larger scale, then manufacturers could go shopping and look at the various products that are in the marketplace.  That could be the mechanism for them to do larger scale post approval study at lower cost, because the infrastructure is already there.

     So it is an important concept.  I have even argued that if these systems were in place, manufacturers may not have to do post approval studies.  Maybe it resides more in the public sector.

     There is another model that is in place right now, for ventricular assist devices.  NIH has funded, to the tune of about $25 million, an effort to collect information on virtually every patient who gets a ventricular assist device and to follow those patients in detail for two years, collecting not only the usual types of data but also blood and tissue samples.  It is a registry based mechanism, the data there approximate clinical trial quality data in terms of being adjudicated, and the registry is used as a mechanism to provide data to the FDA on adverse event reports, device related or not.

     So it is small scale, in the sense that it focuses on ventricular assist devices, but that registry effort is also presenting an opportunity for manufacturers when they are required to do a phase four study to turn to that registry as a mechanism to do it, at a cost savings, because the infrastructure is there.

     MS. LORRAINE:  Thank you for that.  I would like to go back to the question that Jean just asked, and ask for some more responses from the rest of the panel, respective responsibilities and roles here. 

     Rich, I thought I saw an eyebrow.

     MR. PLATT:  It seems to me that government has an enormous role to play in articulating the kinds of needs that the society has.  I see these as societal needs, not the needs of industry or government or even public health.  I think public health is probably the best overall concept that knits these things together. 

     So it seems to me that only government can really say on behalf of society, we have to understand the balance of harms and benefits, and start to create the structures that would allow us to do that.

     So I think the conversation over the last 15 minutes has made clear that there are some astonishing procedural roadblocks, and I hear the smart people around the table not having a good idea of how we are going to solve them any time soon.  It is a big role for government to take that on as its responsibility.

     I think it is going to have to be the public who takes on this issue of where the boundary is between research as it is usually conceived -- being a guinea pig, so you have to volunteer for it -- and simply understanding whether medical care works as people assume that it does.  So that is something I think government can't lead, but it has to help support.

     Then I think government has to use its resources to best effect.  It is a problem when VA says we can't get a letter back from CMS.  That all is within government.  Some of what government does is so big as to be able to change the weather. 

     So I think it is going to be the private sector, since so much of our health care is delivered in the private sector, that is going to have to participate in a way that doesn't break the conventions.

     I think Ken is correct, that because the information that private groups hold has important value separate from its utility for safety and effectiveness, we are going to have to find a mechanism that allows those holders of the data to make it available for these purposes, but not for every other purpose.

     When Janet Woodcock said yesterday we need to figure out how people have access to the data, I'll bet there were 200 people in the room, and I'll bet there were 200 different interpretations of what that means about who can have access to the data.  So that needs to be a real conversation.

     The last thing I would say with regard to Jean's comment is, for those reasons I can't imagine that we wouldn't choose to have a federated model for the systems that would make sense.  I think the centrifugal forces are way too great and the costs would be simply staggering to try to create a database.  It is not clear to me that we could even create the portal that connects to a bunch of federated databases in an omnibus way.  I think we might need a whole bunch of separate portals.
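
     A minimal sketch of the federated idea being described here -- purely illustrative, with hypothetical names (run_local_query, drugX, MI) and no connection to any actual system discussed at the meeting -- is that each data holder runs the same query locally and returns only aggregate counts, so patient-level data never leaves its source:

```python
# Federated query sketch: sites answer with aggregates, never records.
from dataclasses import dataclass

@dataclass
class AggregateResult:
    exposed: int            # patients exposed to the product of interest
    exposed_events: int     # exposed patients with the adverse event

def run_local_query(records, product_code, event_code):
    """Run at each data holder's site against its own records."""
    exposed = [r for r in records if product_code in r["exposures"]]
    events = [r for r in exposed if event_code in r["outcomes"]]
    return AggregateResult(exposed=len(exposed), exposed_events=len(events))

def combine(results):
    """Run by the coordinator: only the aggregates are pooled."""
    n = sum(r.exposed for r in results)
    e = sum(r.exposed_events for r in results)
    return e / n if n else None  # crude pooled event rate among the exposed

# Example: three hypothetical data holders answer the same question.
site_a = [{"exposures": {"drugX"}, "outcomes": {"MI"}},
          {"exposures": {"drugX"}, "outcomes": set()}]
site_b = [{"exposures": {"drugX"}, "outcomes": set()}]
site_c = [{"exposures": {"drugY"}, "outcomes": {"MI"}}]

rate = combine([run_local_query(s, "drugX", "MI") for s in (site_a, site_b, site_c)])
print(f"Pooled event rate among exposed: {rate:.2f}")
```

     The point of the sketch is only that the coordinating party sees aggregates, never records, which is the property that distinguishes a federated query from a central database.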

     MS. LORRAINE:  You have been articulating that one way or another during the whole meeting.  I am wondering whether anyone on this panel wants to challenge that idea.  Does anyone disagree?

     DR. MC DONALD:  This is not exactly a challenge, but I think the model of government as you describe it as this intelligent, omniscient being is wrong.  It is really just all the political forces averaged out in a particular period of time.

     I am now in the government, I am here to help you, so I should be careful what I say.  But what the government can do is articulate positions that then influence the population and influence the government.  It is a funny game we play.

     We have positions articulated partly because very intense people who want to spend a lot of time at it -- it doesn't have to be big money interests -- can have huge effects, not always good on the average, on the impression of what government has to do.

     We have this situation where a person will go and volunteer for a study, and get paid a thousand dollars, and be happy to do it, maybe as a guinea pig, an actual guinea pig.  Then Medicare pays $50,000 so they don't have to pay their bill, and we can't use an ounce of that information for the good of society.

     So I think someone should articulate it: why should you be so damn selfish about your data, when someone else is paying for your care most of the time?  You have said it well; this is to help us do better with you and your kids.  There is this intensity of me, me, me that just defeats that at the present time.

     So maybe the government could articulate your position and get people thinking a little more about how selfish that is in some contexts, defending those areas where it is proper that they be worried about data going out, so we take the good guy and the bad guy at the same time.

     MR. PLATT:  -- selfishness and say, I am outraged that after all this time, you can't tell me why one anti-hypertensive works better than another anti-hypertensive, and I am getting potluck.  You might be able to frame the conversation in a way that we would all agree that we had better know the answers to these things.

     MR. CHUTE:  I think some level of federation is inevitable, that is obvious.  Monoliths fail, that is clear.  The fundamental question is, what are the working pieces of that federation.

     It raises the research versus public health question quite squarely.  It is interesting that phase IV trials as they are called, or studies at least, don't pass the test as public health.  They are research almost by definition.  They are done by private entities, they are called trials and so on and so forth.  So all the issues of consenting, all the biases associated with that, and all the incomplete information gathering accrue to something done from the point of view of research.

     I come from the very old-fashioned school -- I was raised in New England -- that government is of the people, by the people and for the people.  I still believe that.  The role of government is to do what is in society's interest, and in this instance that includes coordination of federated information resources, each of which is managed as a public health resource and held with the security and the confidence and the trust and the appropriate access rules that pass the societal sanity test.

     But it is clear that that becomes, in its larger information structure, a federated resource, which begs the follow-on question of who is the architect, where is the federal health architecture.  I know that term gets batted around a lot.  I think there actually is something called the federal health architecture.  But to what extent does it look at the specific issues of use case requirements for public health surveillance, be it drugs or bioterrorism?  Where are these things being adjudicated with respect to the safety and efficacy questions that Rich brought up as the flip side of all these things?

     That is where that dialogue needs to take place.  I submit the sentinel network is a piece in that space.  Heaven knows you can make elements of that complex piece work and add value in the near term.  But again, I would caution that that would not complete the process, because the larger question is what this large information architecture space looks like, and how the use cases that we are talking about today can be intelligently incorporated.

     MR. BRAUN:  In a sense this is an iterative process.  The architecture, which is kind of cloudy right now, if it comes into focus with a champion, will be worked out iteratively between those who will actually assemble the data and the government, because each of our groups has a different mission.  We have devices here and biologics and AHRQ.  I think it will be important to align the product and the intended use with what our missions are.

     It does get back to where the data will reside and what the access will be.  I think that is going to relate to the value and also the cost.  We haven't talked about cost very much today, but these can be extensive undertakings.  Ideally there would be access.  There are obstacles, as was pointed out, to sharing data.  I think that as this comes into focus and we see how much it will deliver to the different government elements, it will help to build support and impetus to overcome the important obstacles that are also coming into focus.

     Maybe in a way, the obstacles are coming into sharper focus than the fruits.  I think it will be important to try to sketch those out pretty soon to maintain good momentum. 

     MR. RESNIC:  I was considering the issue of clarifying the goals and then the resources that would be needed to build even the connections, enforce the connectivity, the definitional requirements to connect this federated system.

     I raise this perhaps controversial proposal: to go back to the industry, of which I know there are representatives in the meeting here today, who are paying for these post approval studies, who are paying for phase IV studies, and to perhaps incrementally increase the fees and costs of those studies to help support a network that would ultimately reduce the need to do those studies over the long haul, using such resources to increase the momentum to expedite this process moving forward.

     There is value to those organizations to optimize the efficiency of that process.  If we are not getting the right answers now, then perhaps there is an opportunity to reallocate some of those resources, recognizing that in the interim you have to continue.

     I also think there is a need to think about what the scope of the data is in the short term.  We have talked about the federated approach, relying on communication of claims based data systems, the linkage that is required.  All of these things will take resources.  I still think it may be interesting to think of some of the novel long term potential proposals from yesterday's smorgasbord of discussion, the pharmaceutical perspective, the patient direct to physician, and whether any resource pool that could be generated from this controversial proposal could be in some ways used to direct, enrich and fertilize that possibility.

     MS. CRONIN:  I just wanted to add on to that concept, and what Chris said earlier.  ONC has had a lot of conversations in the last year or so with pharma representatives.  While they have been informal, I think there is an emerging idea that is building, and a huge interest.

     They want to participate in this infrastructure development and the NHIN on a local, regional or national level.  They are looking for the right mechanism to do that, so that they are not being perceived as too self interested.  I think there is a possibility that through an institute or some type of nonprofit entity that those kinds of contributions could be made.  I think people are just exploring those ideas at this point.

     So in terms of industry's role in the emerging infrastructure, I think that hopefully there is some potential role that would be safe and appropriate and perceived in the right way.

     I also think, building off the federal health architecture concept, that is real, although a lot of people think it doesn't go beyond a bunch of PowerPoints.  There is this opportunity for VA, DoD, CDC, FDA, all the public health agencies, to be working together on how the federal health architecture is going to intersect over time with the NHIN, so that data from electronic health records, as it is exchanged locally, regionally and across states as necessary, is also interoperable with the government systems, and we can be using that data as appropriate.

     Again, that is a long term vision.  I think you are focusing on what is practical in the next couple of years.  There is a whole host of data sources out there, and a lot of very specific needs that have been articulated this morning, both by some of the public comments and the panelists.  It would make a lot of sense to be very focused on the type of surveillance activity that FDA is most interested in by the type of medical product, map those out to the data sources that are available, and be realistic, knowing that devices with no unique identifier for the next couple of years at least are probably going to be dependent on registries or where the utilization is well documented.

     But other kinds of interests and types of surveillance lend themselves to the health information exchanges in Boston or Indianapolis or wherever they are operational and the data quality is there, and those efforts could be pursued in the near term.  So you have a short term strategy that is clearly based on real data sources that are accessible today under your current resources or collective resources, and that you are merging over time into the infrastructure that is being built on a regional and national level.

     I think that collectively across HHS, there will be some funding opportunities to do that.  Also, as Ken and Chris articulated, as we start to develop these use cases -- and for those who don't live in the IT world, they are really just opportunities and scenarios that clearly articulate what is involved with adverse drug event surveillance or medical product surveillance.  So as the NHIN gets deployed more broadly, the requirements for that type of surveillance would be built into those systems, just like they would be incorporated into a certification process for electronic health records.  Electronic health records over time would not only be capturing the kind of data you need, but they would have the kind of functionality to be able to report out that data in an interoperable fashion.

     So it is clearly a system approach.  Perhaps more this afternoon we can talk about what those incremental steps might be.  But I think clearly we would like to talk to you more about how we might be able to help you.

     MS. CUNNINGHAM:  It is good news, listening to what Kelly was saying about being able to aggregate that information.  We have talked about it.

     One of the things that we continue to do is work with what we have, with the hopes that there will be a larger system and we can get more information.  That is one of the things we continually struggle with, especially with new molecular entities.  So the dream would be to have an ideal system where all the data are ultimately aggregated. 

     I think that can happen down the road for sure.  Then we will be able to get certain information more rapidly.  I think what we do in the meantime is, and that is something we need to think about, what is your ultimate goal and what is your ultimate end point.  I think for all the groups around here, they are slightly different.  I think what each individual group needs to think about is what is most important for them, and for us to try to tap that, so that we can get information back relatively quickly.

     I know from our health care system it would be great to have funding strictly assigned for effectiveness research.  That was not the case.  Certain information finally came out, certain RFAs came out, effectiveness research is being done.  Great, someone is doing it, we can start looking at that. 

     Other areas, you want to see how you drill down certain information and evaluate intervention at the patient level.  We have a great health care system where that is done very easily and quite often, and that needs to be channeled back up so you can see what happens, and you can see how best to intervene in different areas.

     So I think as this all comes together, we need to think about what is most beneficial for the individual groups as we work towards building that ultimate system.

     MS. LORRAINE:  For the last few minutes we have been talking about integrating and linking different sources of data.  I think that is very important and we need to pursue that some more. 

     But before we go down that road a lot farther, I want to come back one more time to the theme of the purpose of the sentinel network that has been raised several times.  I would like to have the sense of the panel as to whether you think we have sufficiently articulated that.  I think it is very important to know what we want out of something before we start trying to design it.  I agree with all those remarks.

     So yes, we have consensus? 

     MR. MC DONALD:  Could you restate what it is?

     MS. LORRAINE:  The purpose of our -- I think we have heard a few different things being said this morning, so I am wondering if there is someone on the panel who would like to take a shot.  I think Barb had one formulation this morning.  Would you like to restate what you were talking about? 

     MS. RUDOLPH:  Sure.  I think what I was trying to say was that sentinel events to me -- or, the network ought to focus on things that are directly related to increasing morbidity and mortality in patients, because I don't think we can cover the waterfront of effectiveness.  I think it is too much. 

     MS. LORRAINE:  So you see those things as readily separable, safety and effectiveness?

     MS. RUDOLPH:  Maybe not readily.  But I think when you get further away from morbidity and mortality, we are going to have a much harder sell to the public in terms of their giving up their information about themselves, and also to providers.  I think providers are going to be less likely to want to give that up as well, because they are measuring effectiveness.  Many providers across the country are doing quality studies and whatnot.

     So I think the further you go towards the effectiveness side, the less support we are going to get from the public and from the providers, so staying as close as we can to events that occur that are related to morbidity or mortality is going to be a much cleaner sell to everyone.

     MR. MC DONALD:  I take a counter view.  Maybe we should think about this, as Chris said, as an incremental process.  Maybe you do start with what looks like the richest fruit.

     But having said what you said, I couldn't operationalize that.  If you said we are looking for these events that the FDA now lists as being bad deals, you could operationalize that, but once you get into what is causing what in morbidity and mortality, it is tough.  If you are going to dig all this data, you might at least be prepared to use it for other purposes.

     I think the only problem is the connections and the politics; there are two problems.  Ken's issues are key.  You have got to keep coming back and focusing on these detailed issues.  For the devices, I think you can get codes out of CMS that will handle your getting started. 

     As a way to get to the final, excellent classification systems, there are two of them that exist.  They don't go down into deep enough detail, but CMS can make J codes and those other kinds of codes fairly fast.  I think they did for some of the distinctions and some of the devices.

     So I think the goal should be heated passion or being hot to get something done, and not make it so that we get blamed if something doesn't get done, because it is too hard. 

     MR. BRAUN:  I'll just throw something out here.  I seem to sense consensus that what would be done would be something done in the short to intermediate term, at least for purposes of this discussion.  We are not talking about long term planning.

     I think it is my sense that there was agreement that it ought to be a large system, and we can put numbers on that.  The word federated was used, so that implied large.  This system would be able to obtain information about exposure -- medical product use, in the jargon called exposures -- and be able to do that with some precision, and also assess outcomes or adverse events without relying simply on computerized data.  If the data were in an electronic medical record, that would be okay, but absent an electronic medical record, one would need to have the ability to confirm the outcome or the diagnosis by going to the medical records.  To make those exposure-outcome associations, one would need to be able to obtain information on other factors that might influence them -- in the epidemiologic jargon, confounders.
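
     As a toy illustration of the exposure, outcome and confounder point -- all counts below are invented, and this is only a sketch of the general technique, not anything proposed at the meeting -- a stratified Mantel-Haenszel estimate shows how adjusting for a single confounder can change a crude association:

```python
# Crude odds ratio versus a Mantel-Haenszel odds ratio stratified on one
# confounder (age group). Counts are invented for illustration.

def odds_ratio(a, b, c, d):
    """a=exposed cases, b=exposed non-cases, c=unexposed cases, d=unexposed non-cases."""
    return (a * d) / (b * c)

# Strata: (exposed cases, exposed non-cases, unexposed cases, unexposed non-cases)
strata = {
    "age<65":  (10, 990, 20, 3980),
    "age>=65": (40, 460, 30, 970),
}

# Crude analysis ignores the confounder.
a = sum(s[0] for s in strata.values()); b = sum(s[1] for s in strata.values())
c = sum(s[2] for s in strata.values()); d = sum(s[3] for s in strata.values())
print("crude OR:", round(odds_ratio(a, b, c, d), 2))

# Mantel-Haenszel estimate adjusts for the stratifying confounder.
num = sum((a * d) / (a + b + c + d) for a, b, c, d in strata.values())
den = sum((b * c) / (a + b + c + d) for a, b, c, d in strata.values())
print("Mantel-Haenszel OR:", round(num / den, 2))
```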

     So I think that would be essential.  I'm not sure we got into that level of detail, but I will just throw that one in.  That would allow us to assess the safety of the products.  I think since we focused on safety, that is the system that I think most people were talking about.

     The issue of effectiveness was raised.  I would just say that if you had a system that was excellent at assessing safety, you probably would be able to do a reasonably good job on effectiveness; that would come with the territory.

     So I will throw that out there.  I think from the FDA perspective there are a lot of signals out there, so this could be primarily for testing and getting answers to safety questions, rather than simply generating them.  Although again, just like I said about effectiveness, a system that was excellently suited to assessing safety risk probably would also be able to generate hypotheses and signals as well.  That would be another benefit.

     So I throw that out as what I think I heard, more or less.

     MR. DAL PAN:  I think Dan Budnitz touched on a lot of this before when he said there are different levels of things we want to know about.  I think we do want to know about all of them.

     First of all, there are the really bad adverse events that are drug related; our current system does a reasonably good job on those, but we can always improve on it.  So that is something we would like to use this system for.  But then building beyond that, there are the things that are harder to tease out, the myocardial infarctions that may be drug related or have a drug as a contributing component, things like hip fractures that Ken mentioned before.  Those kinds of things our current spontaneous reporting system isn't good at.  I think with the right methods development and validation, a system like this could help us with those.

     I think there is the point too of having a better understanding of things we already know -- why certain people get adverse events that we already know about -- and maybe identifying risk factors and interventions.

     I think Miss Paxton's presentation yesterday, while it wasn't a drug related adverse event, you could use the same model.  We know total joint replacement redo's are an issue, postop infection is an issue.  We can study this, we can identify risk factors, make an intervention and assess the effectiveness of that intervention.  That would be really ideal, if we could understand that about even common adverse events that we already know about.

     So those are the three things.  I agree with a lot of other people that it might take a tiered approach or an incremental approach rather to get there. 

     MS. LORRAINE:  Jean or Ann, would you all like to comment on this from AHRQ's point of view?

     MS. SLUTSKY:  Yes.  AHRQ is on that border.  We are quite concerned with patient safety and effectiveness, and see it as a continuum.  Not having any real regulatory power or mandate, our interests are primarily in providing the support for doing this and funding the research and what we can in terms of infrastructure.

     Also, one of AHRQ's key roles as many of you know, because we are a rather small agency, is that of a convener.  We often are able to bring together groups that wouldn't normally sit down and break bread together to talk about issues and to try to form some consensus. 

     So for us, we are trying to get a bit of a feel for how we can be helpful and how we can be an incubator, and how we can further both the infrastructure development, methodology development, as well as some proofs of concepts. 

     MR. PLATT:  Do you have any interest in trying to have the group prioritize the many useful things we would like to see a sentinel system do? 

     MS. SLUTSKY:  Yes.

     MR. PLATT:  Because we might use our resources differently if we do that.  Can I put something up?

     MS. LORRAINE:  Please.

     MR. PLATT:  I think Ron Krall was right.  We ought to have a system that is able to rapidly identify excess risks of things we have reason to be worried about.  This will differ from one therapeutic agent or device to another, but the system ought to routinely do that for every new device and do it as quickly as possible.

     It needs the capability to do hypothesis testing quickly when those signals are generated, because no matter how sophisticated we are, a lot of those signals will not be ones that ought to drive decision making or regulatory action, and we will need to know that.

     I tend to say the next thing ought to be the ability to understand quickly whether therapeutic agents are used appropriately, that is, as intended.  We ought to build that into the system.

     Next, I would say we ought to have the capacity to detect unexpected adverse outcomes.  I prioritize first for the things that are common over the things that are uncommon.

     I have a fairly steep gradient on those.  I would say for sure-for sure we need to do the first three, and we ought to attend as we can to the ones after that.  Since I was an advocate for the effectiveness piece, I subscribe to the notion of saying it oughtn't be the first thing we build into the system, but we ought to be thoughtful about making sure that we build a system that makes it easier rather than harder to do effectiveness work.
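
     A minimal sketch of the first priority, rapidly identifying excess risk of things we already have reason to worry about -- with an invented background rate and invented counts, and only the shape of the comparison intended -- is an observed-versus-expected check against a Poisson model:

```python
# Compare observed events against the count expected from a background
# rate, using an exact Poisson upper-tail probability. All numbers here
# are invented for illustration.
from math import exp

def poisson_tail(observed, expected):
    """P(X >= observed) for X ~ Poisson(expected)."""
    term, total = exp(-expected), 0.0
    for k in range(observed):
        total += term                  # accumulate P(X = 0 .. observed-1)
        term *= expected / (k + 1)
    return 1.0 - total

background_rate = 1.5 / 1000           # events per person-year, assumed known
person_years = 20000                   # follow-up accrued so far among the exposed
observed_events = 45

expected = background_rate * person_years
p = poisson_tail(observed_events, expected)
print(f"expected {expected:.1f}, observed {observed_events}, P(X>=obs) = {p:.4f}")
```

     In practice such a check would be run repeatedly as data accrue, which calls for sequential methods that account for repeated looks; the sketch leaves that out.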

     MR. MANDL:  I agree with Rich's prioritization of things that the network needs to do.  I think that it is still going to be important for the FDA in particular to think about what the limits are on what responsibility it wants to take for what the network can do. 

     If in the near term we design a network let's say that does not have 200 million lives covered, I think it is important for success to define the metrics of success according to what the network is being designed for. 

     So I think we should be very attentive to the proper match between the goals and the network we believe we are going to design, and do that in phases.  The metrics of success should match the capability of the network that is designed.

     I will emphasize again that the points I rattled off earlier were not intended to be obstacles, but rather to inform design requirements.  I think that if you look at the work that is going on in ONC, the kinds of architectures that are being proposed for information exchange, take those design requirements into account, first and foremost.  Some of these lessons have been learned over and over again, and we can take advantage of them as we go forward.

     MS. CRONIN:  I just wanted to follow up on that.  I think if you have priorities from a public health perspective in the near term for the sentinel network as you define it, say in the next one to three years, we can use those priorities in thinking about how to then build out the longer term infrastructure, so that over time, as Chris and others articulated, the conceptual framework for a sentinel network would be integrated as part of the NHIN.

     But I think we need to be mindful of those priorities that are somewhat based on feasibility in the short term and what data sources you have available now, which are quite disparate and different.  How those will then -- are those the same set of priorities you want to inform the future, given your overall needs.

     So I think we need to be careful in articulating those priorities and how you would want them to feed into the American health information community as the Secretary and others start to prioritize how that feeds into all the other processes we have in place.

     MR. MC DONALD:  I would take the proposal you stated and cheer for it, but add a point five in front of it, or make it part of the first one.  That is, to be even more specific about the first step, take the designated event list, maybe do a quick review -- there may be some that are really tough -- and say, we are going to tackle these, be able to find these.

     With that, you can model where you can get the data and what the challenges might be, and some of the limits of detection and detectability. 

     The drugs are all computerized in this country, the labs are all computerized, and most bills are computerized.  I would wager you would go a long way with those three things if you could tap into them.
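
     To make that concrete -- invented records and codes, and only a sketch of the idea that computerized pharmacy, lab and billing data can be screened together -- flagging a designated event among exposed patients is largely a join-and-filter exercise once the three sources share a patient key:

```python
# Screen dispensing, lab and billing data against a designated event list.
# All patients, drugs, tests and thresholds below are invented.

dispensings = [{"patient": 1, "drug": "drugX"}, {"patient": 2, "drug": "drugX"}]
labs        = [{"patient": 1, "test": "ALT", "value": 410}]     # liver enzyme result
bills       = [{"patient": 2, "dx": "acute liver failure"}]

designated_events = {
    "lab":  lambda r: r["test"] == "ALT" and r["value"] > 200,  # marked elevation
    "bill": lambda r: r["dx"] == "acute liver failure",
}

exposed = {d["patient"] for d in dispensings if d["drug"] == "drugX"}
flagged = ({r["patient"] for r in labs if designated_events["lab"](r)} |
           {r["patient"] for r in bills if designated_events["bill"](r)}) & exposed
print("exposed patients flagged for review:", sorted(flagged))
```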

     MS. CUNNINGHAM:  I would like to concur with what Rich said, but I think emphasize the appropriate use to a large degree.  A lot of the adverse events that one sees are due to inappropriate use.  So that is easy to do, evaluate that.  You can tackle a lot of what is already out there. 

     MR. GROSS:  I would second Rich's priorities list, with an emphasis on truly new devices, not the next generation of a device that has been there for awhile, at least as an initial effort.

     On the third point, about whether a product is used appropriately: in the device world, much of the stuff is used off label.  So I don't know what that means in terms of appropriate use.  So maybe we could all talk about that a little bit.  Drug-eluting stents are a classic example, where virtually 75 percent of use is off label.  That presents some interesting issues.

     With regard to effectiveness, in the device world I think it is the opposite side of the same coin, safety and effectiveness.  If a hip implant fails, it is not effective and it is a safety issue.  So to us it is one and the same. 

     MR. CALIFF:  I was resonating with what Clem said, but trying to put them together.  We have got an increasing number of things that are numbers now.  The only things that are not numbers are the things that really matter.  It is the interpretation and putting it together.

     I think what Rich said made a lot of sense.  It is almost as if there is a time scale and a difficulty scale -- I agree with Fran that it is within view that you could have everything together, but it is probably between ten and 50 years out, depending on who you believe.  There are some things that are easy to do in the immediate framework, but unless you build a model to get to that ultimate point, which I think is what I heard several other people say here, you lose out.

     I think an important part of the model is identifying places where we can codify clinical knowledge more effectively now and begin to develop some examples of where it can happen.

     Just to pick on my own people a little bit, I think an advantage of that is, it would draw in the professional societies and the academic medical centers to actually start playing ball.  I think everyone else wants to do it.  Government is somewhat prohibited by all these things that happen in politics that none of us control at times, but it has been hard to get the professional groups to ante up and do what they ought to do, and academic medical centers tend to be introverted unless the NIH puts a fixed nitrogen carrot out here to do it.

     If you begin to show how codifying clinical knowledge in the context of all these numbers that we collect begins to have an impact -- I think drug-eluting stents will be a case where that happens -- you will draw more people into it and get to the upper right-hand corner of the diagram more quickly.  But just trying to do that wholesale won't work, I think is what everybody is saying.

     I am just wondering if there is a way to formulate some examples where it could work; drawing more clinical groups in would be worthwhile.

     MR. RESNIC:  I just wanted to add -- and I'm not sure where in the prioritization the experts would put these -- three things I'm not sure I heard Dr. Platt mention.

     One was the continuous nature of prospective monitoring for either the events we believe we must be monitoring for, versus those that we weren't anticipating monitoring for.  So in a drug eluting stent experience, it wasn't expected that there was going to be this late thrombosis risk.  It was an outgrowth of late observations of some clinical registries, and then the trials.

     The second and probably what a lot of folks have talked about is the validation piece, validating that whatever is detected is real.  Then there is the communication piece, mechanistically what is the process of communicating and distributing. 

     This gets into one of the priorities from the original public notice.  You want a bit of a dynamic system.  So I don't think what we want ultimately is a network that requires a statistician to design a new study for each prospective drug for the same seven or ten outcomes.  I think we want to have it as automated as possible, as fluid as possible, and then it is launched: these are the general adverse events that we are studying for, these are the specific adverse events we are studying for, this is the time period we are looking at, and we move forward to the next one.

     MR. MANDL:  I think another point that has emerged in the discussion that should be explicitly addressed, although I don't know exactly how yet, is this idea of prospective surveillance, claims based, et cetera versus the registry approach.

     I do think that with the current set of technologies that are being explored, including the use of electronic medical records, personal health records and other communication tools with providers and patients, capturing high quality data is a potential goal of the network, but one that is probably a second phase goal.

     Thinking about registry versus data mining with pure secondary use data is probably worthwhile, and may in some way relate to expectations, requirements and regulations at some point.

     MR. BUDNITZ:  I just want to say, I agree with the prioritization that Rich put together.  Also I agree that probably we should focus on the newer devices and drugs.  But be mindful that in terms of public health burden, as a consensus panel with the CERTs put it a few years back, and as Brian Strom said, it really is older drugs used poorly that probably cause the greatest public health burden.  Just be mindful that we are probably missing that in the system.

     MS. LORRAINE:  Anything else from the panel right now?  I would like to open the microphones up to our attendees to see if they would like to comment on the topics we have been discussing.

     MS. BEACH:  My name is Judith Beach, and I am with Quintiles.  Today we have heard a lot of discussion regarding issues of data aggregation, use of claims data, linking different sources of patient data and unique patient identifiers, longitudinal tracking and HIPAA compliance.  So I was thinking that it might be useful to mention another private sector model that hasn't been discussed here yet for the sentinel network.

     It is through a company called VeriSpan.  Quintiles was a cofounder of VeriSpan through a joint venture.  VeriSpan is the largest provider of de-identified, patient centric, HIPAA compliant longitudinal data, delivered in near real time through VeriSpan's de-identification engine.

     This is a patented, publicly available engine that provides de-identified unique patient identifiers.  VeriSpan uses it to link half of all the prescriptions written in the whole United States, and 20 percent of all medical and hospital claims.  It is cross linked for each patient.  There are 150 million unique U.S. patients in this database.  They are longitudinally tracked up to five years so far.
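
     As a rough illustration of the general linkage technique being described -- not VeriSpan's patented engine, whose method is not specified in this transcript -- one common approach is to have each data source derive the same pseudonymous key by keyed hashing of stable patient attributes, so records can be cross-linked without exchanging identities:

```python
# De-identified linkage sketch: a shared keyed hash over normalized
# patient attributes yields the same pseudonym at every data source.
import hashlib, hmac

SECRET = b"shared-keying-secret"   # hypothetical; would be managed carefully

def pseudonym(first, last, dob):
    token = f"{first.lower()}|{last.lower()}|{dob}".encode()
    return hmac.new(SECRET, token, hashlib.sha256).hexdigest()

# Two different sources hold the same person under the same attributes.
rx_claim   = {"key": pseudonym("Ann", "Smith", "1950-02-01"), "drug": "drugX"}
hosp_claim = {"key": pseudonym("ann", "smith", "1950-02-01"), "dx": "hip fracture"}

linked = rx_claim["key"] == hosp_claim["key"]
print("records link to the same pseudonymous patient:", linked)
```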

     Indeed, for the past 18 months, FDA's Office of Drug Utilization has been using data from VeriSpan to track utilization by drug.  It is especially useful for FDA in the contraindicated concomitant medications, for instance, if a patient is switched from one drug, say Vioxx, to another and tracked there. 

     But FDA or others could have use of this VeriSpan de-identification engine that is publicly available, and it could be useful for this sentinel system, and I just wanted to make sure everybody knew about that, because we were mentioning it.

     MR. ROBINETTE:  Good morning.  My name is John Robinette.  I am going to speak first as a software engineer.  Listening to the conversation the past couple of days has been very enlightening and interesting.

     One thing I would like to bring up or reiterate that was mentioned earlier is measurable outcomes and metrics.  Once this system is deployed, or before it is deployed, it is going to have to go through some validation of sorts, and to be able to determine our level of success or not, it is going to be very important to know if we are doing the thing the right way, and how to move forward.

     The current AERS system has been maligned a lot recently, but when it was conceived, were measures in place to know whether it is okay to find an adverse event after one month, two months, ten months?  Those kinds of things aren't clear yet, so how we design the system and how robust it is are going to depend on how those measurable outcomes are defined.

     The second thing.  We were starting to hear the conversation go towards priorities and some of the specifics as we design.  Let me suggest some things I heard around that.  There are issues of data quality and data standards, both the structure of the data and the taxonomies used.  Those lead into interoperability issues.  There are data completeness issues, both in terms of whether we are receiving all the adverse event or outcome data in the system, and for the events we do receive, whether we are receiving all of the data about each event.

     Then there is the data analysis issue, the science behind it and the algorithms that would be developed to understand the data coming in; and the people who would be doing that -- whether we have the right trained cadre of people to execute -- is also something to consider.

     So as we continue this conversation, it gets more and more and more complex.  I imagine as we keep talking about this, we will keep adding more and more to it.

     My list is just my list.  What I am suggesting is that we come up with a list of discrete components or modules or functions to help us focus on specific parts of the solution and roll it out, based on those priorities for funding and resources and so on.

     The other thing I would like to bring into the conversation, now as a citizen, as a father of two young boys.  Both are current on all their vaccines, I will say.  The other parents that my wife and I hang out with are all fairly smart educated people.  Yet, there is a fairly consistent concern on, should I vaccinate my child for flu this year, or this or that vaccine.

     It is appalling to me in this day and age that people should question the safety of vaccines.  As we heard yesterday or today, it is a very safe way to prevent very serious diseases.  Yet people are really struggling with this issue in the public.

     What I would suggest the federal government could bring to the table -- the most important thing it could bring to the table -- is its trusted brand, which has taken some knocks lately.  As you roll out any system, you need to get stakeholder buy-in on this to be successful.  The superiority of a particular technology over another doesn't necessarily matter.  There are examples like Betamax versus VHS.  The best technology doesn't necessarily win.

     I think it is very important for the federal government to re-establish its trust with the population so that as we move forward, there is a receptive environment for this to work. 

     I think a side effect of this current lack of trust is that we are currently -- and I am talking about all of us -- operating in an environment of fear to some extent, which makes it difficult to take risks.  We need to be able to move forward by taking some risks and knowing that there are going to be some failures along the way, and to be able to learn from those failures and get better as we go forward.

     So those are my thoughts, and I'll leave it there.

     MS. LORRAINE:  Thank you very much.

     MS. WEST:  I have two comments today.  I think we need to recognize that a comment that was made by Susan Sachs is a critical methodologic issue.  That is, we are using the same data both to identify and to assess a signal.

     Typically in research we use a split sample when we are trying to identify something and then we are trying to validate it as well.  So we need to be careful about what the purpose of our sentinel network would be, whether it is for identifying signals or assessing them.
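
     A small sketch of that split-sample point -- invented report counts and an arbitrary threshold, shown only to illustrate the separation of identification from assessment -- screens for drug-event pairs in one half of the data and checks the flagged pairs in the held-out half:

```python
# Split-sample sketch: identify candidate signals in a discovery half,
# then assess only those candidates in a validation half.
import random

random.seed(0)
# (drug, event) report pairs; in real data these would come from reports or claims.
reports = [("drugA", "rash")] * 60 + [("drugA", "nausea")] * 5 + \
          [("drugB", "rash")] * 8 + [("drugB", "nausea")] * 50
random.shuffle(reports)

half = len(reports) // 2
discovery, validation = reports[:half], reports[half:]

def counts(sample):
    out = {}
    for pair in sample:
        out[pair] = out.get(pair, 0) + 1
    return out

THRESHOLD = 15  # arbitrary screening cutoff for this toy example
candidates = {pair for pair, n in counts(discovery).items() if n >= THRESHOLD}
confirmed  = {pair for pair in candidates if counts(validation).get(pair, 0) >= THRESHOLD}

print("flagged in discovery half:", sorted(candidates))
print("held up in validation half:", sorted(confirmed))
```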

     The second comment that I would like to make goes very much to the linkage issues that we are talking about.  We need to recognize that our health care system is very fragmented, and often patients are moving from health plan to health plan.  One of the things that makes it very difficult is that when patients move from health plan to health plan, we lose the longitudinal nature of the database.  That is why the VA and the Medicare populations are so critical.

     In fact, the H2 blocker and hip fracture association, if you recall, was identified in the U.K. GPRD.  Why?  It is a longitudinal database.  We need those sorts of data for looking at long term outcomes.

     So I think where the government could really help is in making the population, the U.S., aware of the fact that we need to be able to link these data systems, and that we have to have a way of doing that for drug safety. 

     I know Clement says that we will not have a patient identifier in his lifetime.  I'm not sure they will have it in my lifetime, either.  But if we don't make a push for it, we will never get it.  So that is where I think the government can really help.

     MR. WILKOFF:  My name is Bruce Wilkoff, physician at the Cleveland Clinic Foundation, and I also represent the Heart Rhythm Society.

     Just a couple of observations.  We had sentinel issues over the last couple of years over implantable defibrillators.  We have reflected deeply about those things, and these are my observations.

     One is that there is a lot of this that we haven't talked about which has to do with communication.  The population is extremely risk averse.  While I am very much in favor of a sentinel network and understanding what is going on, we will by nature of this detect things; that is what we are looking to do.  When you detect these things, you are going to communicate it often.

     I don't think we have developed any consensus about what is an acceptable risk.  As a matter of fact, it is extraordinarily irrational.  So developing this network without developing a methodology for communication, a methodology for putting these things in perspective -- we talked about safety and effectiveness.  As a physician I treat very sick people, at very high risk of dying, with life saving but dangerous therapies, and I put those risks in perspective all the time, but the population doesn't do this.  We are talking about very low level risks, yet often people are just not willing to take any risk.

     So I see the danger of having this discussion without us developing this part of it -- I am not going to solve this for us -- and I think this is way underdeveloped.  The problem is not so much that we don't know about these things; often it is that we don't know how to talk about them once we have them.  So I see that as an extraordinary thing.

     Why do I know about this?  Because it is actually quite a bit easier to detect some problems with an implantable defibrillator, which reminds you with a shock, or with telemetry that I can collect.  A lot of these problems about collecting quality data don't necessarily exist if we would just take advantage of what is there.  There is an opportunity for the sentinel network, for instance, to do this with remote telemetry of implantable defibrillators.  You could work out a lot of these other issues.

     I can get you quality data on large populations of well qualified patients.  I can get all these things done, the things you are having trouble with, yet I still won't know how to talk about them, and I won't be able to communicate about the risk.

     So I think two things to say.  One is, we have to work out that conversation, understanding, communicating about risk.  The other is, I think that while we may be more worried about the drugs and other things that are going on here, a lot of the problems that aren't a problem in terms of collecting data or devices wouldn't have to be.  There is an opportunity to do this with some devices, at least.  Maybe we could work out some of these problems even as a test thing, not the final thing.  But there are so many roadblocks that we should use what we have as an advantage here.

     MS. LORRAINE:  Thank you.  I want to say that I think we all recognize that communication is a huge piece of this.  We have bitten off maybe more than we can chew today, but we definitely know we need to address that issue as well.  Thank you, point well taken.

     MS. STEVENS:  My name is Lee Stevens.  I work for the Food and Drug Administration Center for Biologics.  My role there is strictly in the area of data standards, so many of the notes I have taken and comments I have are specifically related to that area. 

     Also, I invite many of you to begin to partner with us, because FDA is very resource constrained.  I think to get some of this stuff off the ground in terms of trying to tackle some of the standards or some proof of concept testing and that kind of thing, we are going to need some help from some outside partners.

     I wanted first of all to state that in HL7 there is a lot of work going on in the public health and patient safety area that can at least provide a common data standard or format in which people can begin to exchange data.  There are standards available out there that we can begin to start testing with.

     For example, in my committee, patient safety, we have two messages that we are working on.  The individual case safety report is a draft standard for trial use, and also the patient safety generic incident notification message, which is being driven by the U.K. national patient safety agency.

     So the good news is that there is some standards work going on.  The bad news is that we really don't have a lot of organizations lined up to do some proof of concept testing, to see whether or not the standards are robust enough to move the data that you have buried in your systems.  Putting the data into some type of standardized format also implies the use of some standardized terminology, so that people can begin to look at data the same way.

     So I offer that as an opportunity for people to think about in terms of amongst themselves thinking about some limited proof of concept testing to look at the standards that are already available and to see whether or not you can make use of them to exchange data.

     Also, there is a public health emergency response sig in HL7.  Even though patient safety and public health share the same domain in HL7, it is very interesting how the public health and patient safety people have very different views about the same data. 

     So there is a lot of discussion going on in HL7 in terms of trying to come up with some standardized terms or rules of engagement in terms of how to understand what is the public health domain and what are the kinds of activities that should be modeled against it.  So that is another area.

     There are some messages that are being developed, one for outbreak protection and investigation requests.  Also, the patient care committee in HL7 is working on an allergies, adverse events and intolerance message.  So there are a lot of things going on in the HL7 standards arena that perhaps we can begin thinking about, in terms of looking at these standards and at the data in your individual systems, to see whether or not you can begin to format and exchange data in these formats.

     The other general comment I have is that a nice way to think about moving forward is to eat the elephant one bite at a time: perhaps we can identify some very limited use cases and try to design some clinical decision support queries against these large databases, based on a particular disease or adverse event list or something.  Here are all the elements that we want to collect; build a query against our systems to see whether or not we can pull the data out, get it formatted and transmit it somewhere where another party can review it.  I think that might be a way to move forward, taking advantage of some of the standards that do exist.
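
     A sketch of that kind of limited use case -- the element names and the JSON layout are placeholders, not the actual HL7 ICSR structure or any adopted standard -- would pull the agreed elements out of a local system and serialize them for another party to review:

```python
# Limited use case sketch: extract an agreed element list from local
# records and serialize it in a common format for exchange.
import json

AGREED_ELEMENTS = ["patient_age", "sex", "suspect_product", "event_term", "onset_date"]

local_records = [
    {"patient_age": 71, "sex": "F", "suspect_product": "drugX",
     "event_term": "hip fracture", "onset_date": "2007-01-12", "chart_note": "..."},
]

def extract(record):
    """Keep only the agreed elements; drop anything not in the shared list."""
    missing = [e for e in AGREED_ELEMENTS if e not in record]
    if missing:
        raise ValueError(f"record is missing agreed elements: {missing}")
    return {e: record[e] for e in AGREED_ELEMENTS}

payload = json.dumps([extract(r) for r in local_records], indent=2)
print(payload)   # what would be transmitted for the other party to review
```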

     Then the last comment I have is the need for the activity that is going on in HL7 to be more aligned with what is going on in ONC and AHIC.  I was a member of the FHA public health surveillance work group over a year ago.  It included all of the federal agencies that were involved in patient safety or public health type missions, or had some such component, which also takes in agencies like EPA, USDA and the Indian Health Service.

     We all started looking at, one, the data we collect and, two, who we communicate with, and we came up with a very high level diagram and started working on a baseline set of data elements that we could design messaging against, and that eventually could be used to build a public health or patient safety data set.

     Once things got reorganized under the ONC, that group fell aside, and then some of the use cases that the biosurveillance group is working on right now are not aligned with what is going on in HL7.  So I think that is another area where the federal government can help in terms of driving the standards, so that we begin developing some immediate standards we can use to address some of the issues that we are talking about, and then roll that into the work that is happening at HITSP, because right now there is a disconnect.

     MS. LORRAINE:  Thank you very much.

     MS. CRONIN:  Can I respond to that, Catherine?

     MS. LORRAINE:  Sure, Kelly, go ahead.

     MS. CRONIN:  I recognize it is a huge undertaking to try to get all the standards through this harmonization process.  There are over 12,000 volunteer hours that went into trying to name the standards and get interoperability specifications together for medication history and labs and a handful of other areas.

     The standards development community is really committed to making this happen, but we do need to be very consistent with the priorities that are set by the Secretary and informed by a multi-sector group, the American health information community.  It is not a perfect process now.  We realize we need to improve coordination across all the SDOs who are participating.  There are 260 organizations that are participating, trying to be as inclusive as possible and get a reliable process that people can count on in place.

     We are only through one round of interoperability specifications so far.  There is a need to be thinking as we move forward about what the specific priorities in this area are, so we can advance them in such a way that it really does match where everyone in the medical products surveillance world, particularly the FDA, thinks we need to be going, but also leverage what has already been done.

     There is a lot of work around medications that has already been done.  The CMS and AHRQ have sponsored pilots for e-prescribing.  Many of those standards have been tested, and there is a final report being written up now, to understand what more might we have to do.

     I also think that the concept of testing that Lee just articulated is incredibly important.  It is one that the Department is acutely aware of in trying to figure out exactly how we are going to build testing into the whole harmonization process more definitively, so that before the certification commission for health IT builds in an interoperability specification or requirement, we know it is a mature standard that is ready to be built into electronic health records, and as we move forward it will be part of certification of network services as well.

     So I think that we want to be coordinating with HL7 and all the important SDOs.  Becky Kush, I noticed, is here from CDISC.  They have been an instrumental player in HL7 in this particular space.

     So I think a lot of your interests are well represented in the current standard harmonization processes.  In the next year I think it is highly likely that your priorities are going to be advanced and considered.  That will build on the interoperability specifications that have already been submitted to the Secretary and will be recognized formally within the year.

     I just also want to point out, it was good to see in the Federal Register notice that the intent is for sentinel to be based on these international and national standards that are adopted by the Secretary.  Right now, that is the process that is in place to do that.

     MS. LORRAINE:  Thank you.  I will ask our last two commenters to be crisp in their remarks.  It is almost lunchtime.

     MR. GUNN:  Peter Gunn with IBM Health Care.  There has been a lot of very good discussion about the large amounts of various data available and the difficulties in getting all of that together.  I wanted to ask whether we could talk some more about the other end, where there has been an identified adverse event.  Are we satisfied with the way we validate and qualify those gross event reports and is there a way to get that data out in a more meaningful way, not just to doctors and patients, but to the public at large?

     MS. LORRAINE:  Thank you.

     MR. HOLLIDAY:  Sam Holliday with Extensure.  As you may know, we were one of the four ONC contractors to build an NHIN prototype.  One of the things that we did in parallel was to work with four pharmaceutical clients to look at how the NHIN could support clinical research.

     One of the two priority areas that the group identified was medical product safety surveillance and how the NHIN could enhance and add to the processes that are in place today.  I think several people on the panel mentioned that they hoped that the longer term vision for the sentinel network does include interaction with the NHIN, and how the NHIN could support the sentinel network and interact with it.

     In listening to today's conversation, the group of pharmaceutical companies dealt with a lot of the same obstacles that were talked about today -- interoperability, identifying patients, building a longitudinal record, and how that could be used for support.  We came up with three scenarios that also paralleled a lot of today's discussion.  The first was using electronic medical records to enhance the quality and potentially the frequency of adverse event reports by making it easier to report those events.  The second was how to use large databases that do exist to do signal detection, and the third was how to validate those signals through data analysis experiments and review of large databases.

     I just wanted to mention that we did develop a use case document which I believe may have made its way to ONC already.  We will make sure that we do submit it to FDA for review as well, but I think it covers a lot of the things that were talked about today.  Certainly this isn't the end all-be all, but it might be a good thing to look at as a starting point for future conversations about the sentinel network.

     MS. LORRAINE:  Thank you.  I hope you will submit that to us.

     MR. HOLLIDAY:  Yes, we will.

     MS. LORRAINE:  Thank you. 

     MR. SHUREN:  It is noon, which means it is time for lunch.  We will let folks break until 1:30.  I will let you know, for our invited speakers, they are being whisked away to a closed room, and Catherine is going to go join them and have a little offline conversation and push a little bit more on some of the next steps.  After lunch we will get a report back as to that discussion.  So we will pick up again at 1:30.

     (The meeting recessed for lunch at 12:04 p.m., to reconvene at 1:45 p.m.)

A F T E R N O O N  S E S S I O N    (1:45 p.m.)

     MR. SHUREN:  Welcome back.  Hopefully now everyone has eaten, time for a nap.  We built in enough time for everyone to take a siesta before coming back to the table.

     Now that the folks who are so kind to have come on invitation have been sequestered for a period of time, maybe there is a -- I won't say a verdict, but I know there is some input that we got out of that meeting. 

     Catherine, I know you were moderating a discussion during that time, so maybe you would like to fill the rest of us in on what you all have concluded, recommended or thought.

     MS. LORRAINE:  Well, it was a very lively discussion.  Everyone was learning from everyone else, I think. 

     The first thing the group agreed to was a set of priorities for the system.  The first level would be rapidly identifying things that we should all be worried about, and trying to come up with event lists that would help focus people's efforts on what to look for.

     The second goal would be to quickly test hypotheses, validate signals that come from the data.

     The third function would be to try to determine whether therapeutic products were being used appropriately in practice.

     The fourth level activity would be to try to find the completely unexpected event, and the fifth piece, which we didn't have a lot of time to talk about but which is very important and was recognized in the discussion this morning, is effective communication of this information to clinicians and to patients and other members of the public.

     There was some general agreement about where we might start to operationalize this, if you will.  One of the first things that has to be done is that the sources of data need to be identified.  There are a lot of people in this room who have data to share that could be helpful.  We have got federal partners, some of whom are here.  We have the states.  We have health plans, we have CTSA, we have a variety of people.  We need to get a good grip on who has got the data.

     Then we need to do what people called either pilot testing or proof of concept testing, which would be an effort to give the various owners of the data some standard cases, some events that we are interested in, and have them test their data sets to see whether they can find those kinds of events; so can you find the Vioxx event with the data that you have.

     I think there was agreement that there would be some interesting pieces of information that would come from that.  Gaps might come, other kinds of important information would turn up from that.

     There was a really interesting idea.  Chris was discussing the fact that NIST has a very interesting program that they use, in which they invite all comers to test their methodology and their skills in identifying a planted signal in a data set, and that this is a competition that universities, corporations, individuals and various other entities are very eager to participate in.  NIST grades the participants on their ability to find the planted signal.  So that was raised as a possibility that might be tailored to our circumstances.

     Did I leave anything out?  There were many other details that were discussed, but that was the basic outline.

     MR. CALIFF:  You didn't mention our lambasting the federal agencies for not working well together.

     MS. LORRAINE:  Oh, I forgot that part.

     MR. SHUREN:  Don't blame us.  Blame the other agencies.

     MR. CALIFF:  I think it is important to say that -- that I think there was agreement that there is a relatively small set of big gorillas that can make this happen.  There are many others that can contribute, but essentially the things we heard about difficulties between CMS and VA and DoD and FDA, those are actually databases that if they participated in these -- if FDA could even look at its own data that currently belongs to companies that people can't look at, you would find a lot of things in the competition, that there would probably be some big winners.  That is an important part of this, I think.

     Most of my learning is from Rich.  Not about what I just said; that was a personal opinion.

     MR. PLATT:  I'm sick and tired of being Rob Califf's personal punching bag.  I just want to tell everybody that.

     Point well taken.  In recognition of that, VHA and FDA signed an MOU just a few weeks ago on data sharing.  In fact, one of the reasons Fran is in town is, we are going to continue that dialogue tomorrow. 

     I think you are quite right.  We each have things the other is interested in.  That also includes expertise.  We talked a lot about data, but the other piece is expertise.  There is a great need for more, but we have some people here who are very good at it, and they are a limited and known quantity, and we need to make good use of them.

     So point well taken. 

     MR. SHUREN:  Let me ask then, based on that, in terms of the list of priorities, number one was rapidly identifying the things that we should worry about.  It sounds like a charge coming back is, could we in the government better articulate what those things we should worry about in fact are.  That was a question.

     MR. CALIFF:  We did talk a little bit about that.  One of the questions is, when a drug or a device or a biologic gets a nod from the FDA. 

     I don't want to ruin my consulting business, but I do a lot of consulting with companies.  They know based on the biology what has been observed as very low level signals in testing.  They know what the possibilities are, and all companies do their best to look for those.  The question is, is there some way to get that list to the sentinel network without -- the discussion was, can you do that without creating a nightmare of overreaction to what might be possible.

     The example we are talking about now is the COX situation, where you knew from the biology almost from day one there was a possibility of thrombosis dominating vasodilatation and anti-platelet effect.  It didn't show up in the early clinical trials, which excluded people who might have coronary disease.  But it was always a topic of some discussion at some level.  You wonder if there had been broader knowledge beyond the companies, and then the sentinel network could define the events.

     If you are just throwing out a broad net without data definitions, you have a lot of garbage in the system.  But if you had what Rich brought up -- this is not a punching bag, this is giving him a compliment -- he said if you had that list of 12 or 15 possible things that aren't signals but could be signals that could be looked for, you might do a better job of finding out early whether they are things to worry about or not. 

     MR. PLATT:  There are some things that would be constants.  You would probably be interested in them for every new drug or biologic.  There are other things that will be context specific, that you need a lot of knowledge about the agents or the preclinical experience.  But having that be stated up front makes a huge difference in the way you would go about looking for signals.

     Conditional on doing that, it is well within FDA's grasp to use its existing resources and create some new ones that would be able to be much more systematic about early signal detection than the current situation. 

     MS. TRONTELL:  I think having a list of priors, if you will, and starting to look at the data like Rob suggests is very helpful.  I think FDA already operates with a designated medical event list for signal detection and looking at the spontaneous reporting system.  They are the events that are often associated with drugs -- the list is about 20 items -- so we might already have some of that in place for drugs.  I think devices may have a different set of things they look for.

     MR. SHUREN:  What I am hearing then is, maybe there are almost three tiers of what you may be asking for.  There are some things you may be interested in for just about everything coming down the pike, or close to that -- that designated list.  Then there may be, for this particular drug, events we saw in clinical studies that we know about, but within very controlled conditions and a small sample size.  Maybe we wish to highlight those.  They will be in the labeling, but there may be ones in particular that we would like more information on as the product goes into real world use.  Then maybe there is another set that is unclear based upon the clinical data, or where there are some concerns because of biological mechanisms, but we didn't see it there, and we flag those and say we have some interest, should you identify that there may be an association between this sort of event and that drug.

     I don't want to put words in your mouth, but is that how you are thinking about it? 

     MR. PLATT:  I am actually thinking of two tiers.  The things that there is a prior for, and then there is everything else.  I think there seemed to be reasonable consensus that those are two different kinds of problems.

     The good news is, the ones for which there are priors are much more straightforward to try to deal with.  An example that I think is worth evaluation is the system that the vaccine safety data link has been using for the last year, of looking for say ten or 12 adverse vaccine events for each new vaccine that comes along.  The list of things they are looking at is different for each vaccine, but it is a not-bad model to think about as something that might be adaptable for drugs.  Every week a set of health plans submits summary data that can be aggregated to look for a signal for the things that are a concern.
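
     As an illustrative aside -- a minimal sketch only, not the vaccine safety data link's actual methodology -- the weekly aggregated-count check described above might look something like the following in Python.  The plan names, event counts and background rate are hypothetical assumptions, and real systems use more sophisticated sequential methods.

from math import exp, factorial

def poisson_tail(observed, expected):
    """P(X >= observed) when X ~ Poisson(expected)."""
    p_below = sum(exp(-expected) * expected ** k / factorial(k) for k in range(observed))
    return 1.0 - p_below

# Hypothetical weekly summary data submitted by participating health plans:
# doses administered and cases of one prespecified adverse event of interest.
weekly_submissions = {
    "plan_a": {"doses": 12000, "events": 2},
    "plan_b": {"doses": 8000, "events": 1},
    "plan_c": {"doses": 15000, "events": 5},
}

BACKGROUND_RATE_PER_DOSE = 1e-4  # assumed background rate for this event

doses = sum(s["doses"] for s in weekly_submissions.values())
events = sum(s["events"] for s in weekly_submissions.values())
expected = doses * BACKGROUND_RATE_PER_DOSE

p_value = poisson_tail(events, expected)
print(f"observed={events}, expected={expected:.1f}, P(X >= observed)={p_value:.3f}")
if p_value < 0.05:
    print("Aggregate count exceeds the assumed background; flag for epidemiologic review.")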

     MR. SHUREN:  I don't know if Miles or Gerald have a thought on that.

     MR. DAL PAN:  We had a good discussion on that.  I think this issue with the prior probabilities, the things you are worried about for a particular drug, is a good idea.  I still think though that the designated medical event list is something we are always concerned about, so I think it would be useful, and I think feasible too, to be able to look at agranulocytosis and other kinds of events as well.

     MR. PLATT:  I'm sorry if I managed not to be clear about that.  The things that have been responsible for the majority of drug withdrawals I think should probably always be on the list.  Renal failure, hepatic failure, those things it seems to me are constants.  It is silly not to look for them.  Then there are things that are specific to the individual agent.

     MR. BRAUN:  We fully support that.  In fact, we collaborate with CDC on the vaccine safety data link and with the HMOs that are part of that.  So we definitely subscribe to that approach.   

     In fact, for pandemic influenza, we got some funds to try to do further safety assessments for a potential pandemic flu vaccine.  We are working with CDC and with Rich's group to try to enhance the vaccine safety data link, for the very reasons I think we have been talking about today, that you need even greater numbers of people under observation to try to get at some of the safety issues.

     The specific one is Guillain-Barre syndrome after flu vaccine.  Some of you may recall the swine flu vaccine episode that shut down the program about 30 years ago.  So we are sensitive to that past experience.  We voted with our resources to try to do exactly the kind of thing that we are talking about today.  We are hoping that will be successful.  It is going on right now.  It is too early to say right now, but I think it is the way to go, and to pick a focused group of adverse events to work on rather than do an across the board search for hundreds or even thousands of adverse events.  That just becomes too -- it is not feasible for practical and technical reasons.

     So yes, I support this approach.  I think it is a good one.

     MR. SHUREN:  Let me ask, that is the what, and you could say the where.  One of the things that was mentioned was that sources of data need to be identified.  Two prongs there.  The first was, who has it.  Hopefully what we are doing today and through the public docket is an effort to help identify that.

     The second was to then do proof of concept testing, where we give out a case.  Maybe it is a case we knew about before, and maybe it is something new, something that has come out recently.  This would be a high target, and see what folks actually show.

     Is it the sense of the speakers that this is a critical next step before moving forward, to do that proof of concept testing for various data sets before engaging in further activities?  Or are there known quantities, that there are already systems there that could be tapped into? 

     MR. CALDWELL:  It seems to me that there are some parallel steps here that could be taken.  I don't know that you need to necessarily sequence them, because they are not mutually exclusive.

     MS. TRONTELL:  I won't presume to know, but just based on yesterday's presentations we saw many excellent demonstrations of good data systems.  I think if there were a pilot to be done, I would like to see a pilot of putting at least two of them together.  That is part of what we are trying to talk about, to network. 

     We talked this morning about there probably being some federated model.  It might be nice to do some proofs of concept to see how that might work with different kinds of data systems, maybe a couple of permutations of some of what we heard.

     MR. BRAUN:  I think one of the issues that came out in the discussion a few minutes ago or over lunch was that these kinds of efforts cost money.  There is no way around that.  Some kind of support is necessary to undertake this work.  So any kind of proof of concept or test or demonstration of ability and skills to do the work could be part of a pre-funding process.  That would be built into that.

     In my opinion, that would be the way to expedite this.  It is a necessary preliminary step, but if the steps could be telescoped somewhat, it would help implement the system sooner.

     MR. CALIFF:  Another element of the test cases that we talked about was that if there was agreement among the set of experts in surveillance of medical products or postmarketing evaluation, whatever you want to call it, having the test cases would enable people as they were designing their new systems to have something to look to, could our new systems solve this problem.

     Every health system in the country has just bought an electronic health record, and they are all going to take some period of time to develop.  So tuning those to be able to pick up things that you would think would be important for the people in your system would be good.  It would be a good thing, we thought.

     MS. CRONIN:  If there was some clarity around what the top one or two priorities were, and perhaps the top events to be interested in, that would help you determine the data requirements.  Then you could look across what you get through the public process and what you have heard about yesterday and today, and match your requirements for what is most important to you over the next few years to the data sources that are available.

     I think in terms of having a network pilot, there is an opportunity over the next few years to do that through the NHIN trial implementations, because there truly will be regions that will be networked.  It is part of what they will be asked to do, to share data from one region to another.  But given that electronic health records and the status of some of these operational health information exchanges are still in their early days, with the exception of Mark and Ken and a few others, the quality of the data that is being exchanged is still in question.

     So I think it does get back to what are your requirements for the most important questions that you want to pose in the near term, and then match those to the data sources that you know about, including the ones you already have access to, which is quite a long list.

     MR. CALIFF:  This has been an interesting day for me.  I am thinking about some things in a way that I hadn't thought about them before. 

     To pick on two examples from the drug and prosthetic side, if you took HDL-raising drugs -- easy to pick on now since the big one bit the dust, but a very common problem -- it is likely that the ones that make it might cause hypertension.  For the five things that you always look for, hepatotoxicity and agranulocytosis, broad health system data sources would probably be useful.  But looking for strokes, if you knew that was something that you wanted to look out for, you would probably want to go to a different kind of a data system, a professional society or hospital based one, where you homed in on that definition.  That gets at what Rich was bringing up about identifying ahead of time.

     Another thing that is happening now is biological implants and prostheses.  If you are looking at knee replacements, you are probably going to want to look at knee joints, and with biological implants, things like infections are going to be important over time.  So you can imagine the types of networks that you want to look at might even require more than one type for different outcomes.

     MR. SHUREN:  This goes a little bit back to a variation of something Gerald had talked about beforehand, that sometimes if you are looking a little bit more broadly, or if it is common stuff and it is for a drug, you may look one way; if it is a particular adverse event in focus, you may look another.  If it is one of the constants, it may be a larger system, but if it is something a little bit more unique, or even one of the constants where you need some more granularity, a much more focused network.  We know we have that for some areas.  DILIN comes to mind as one model.

     Let me follow up on something Kelly had said about the RHIOs as a test bed.  Say you come up with the case -- here is the test as put forward to us -- you could give us that and use it for various systems to cut their teeth on and see if they make the grade.

     We put that out, and some of the RHIOs made the grade, and that might be one source.  We keep hearing about how you have to look to different sources.  The 800-pound gorilla that Miles just raised is, this is great, but you need the dollars to support it.  What do you think, if the RHIOs turned out to be a good opportunity for us?  Where do you see the funding source potentially coming from?  Is there a larger spotlight that we can focus?

     MS. CRONIN:  Well, certainly in our budget requests we have more dollars going towards NHIN trial implementations.  If there were an adverse event use case developed, we could expect to see a lot of regions, if they are mature enough already to exchange data for that purpose, doing those kinds of trial implementations and making that data available to FDA.

     But if you look at the first round of the four prototypes of the NHIN and the cost and revenue models that were developed, the four consortia that were involved in that ended up relying pretty heavily on assumptions that their revenues over time were going to come from secondary uses of data, meaning data being sold to -- it could be an academic research group, it could be a pharmaceutical company, it could be a quality organization that needs to aggregate data for quality measurement and reporting, or various other public health entities.

     So if we do proceed and the data quality is such that those kinds of activities can happen over the next few years, then there might be various sources of revenues.  I think HHS is committed to trying to do what we can to exploit the NHIN, but it is really a public-private partnership.  So much is happening at a local, regional and state level, in terms of everyone contributing to the deployment of this and to getting the governance right and the policy development.  There are a lot of different activities to pull all together.

     I think there are some short term opportunities.  It is a matter of whether or not we can advance adverse event reporting or detection as one of the top priorities in the next year so that the federal dollars will follow that as a starting point.  Then as the activities become more mature, there will likely be other revenue sources over time.

     But I think again, there are already so many other databases out there that are really rich.  When you think about the unique requirements for device surveillance, the registries that we heard about yesterday from Kaiser and other entities offer such an immediate opportunity that the NHIN or RHIOs are not going to provide.  You can't do unique identification there.

     So it does make a lot of sense for there to be the priority surveillance questions clearly identified in the near term, that would then direct the resources that you do have through the various centers, and perhaps just building off of what you are already doing in your normal followup from the signal detections that are coming in through MedWatch and other systems.

     But I think that in order to reach this goal of having the NHIN serve public health surveillance more broadly, which we intend to do through CDC and all our public health partners, we do need to lay the groundwork.  It is a foundational time, where we want to make sure that the public health priorities are incorporated.

     MR. SHUREN:  What we have talked about so far is on signal detection.  I know number two in terms of the priorities list was to be able to quickly test hypotheses.  I see signal detection as hypothesis generation, and now it is about hypothesis testing.

     For that second priority to quickly test, validate, are there more specifics you can provide?  What do you envision for having that kind of capability to quickly test?

     MR. CALIFF:  We talked about three things.  Part of it is to build the networks, because part of validation is independently showing that the same signal exists.

     The second key thing was methods.  We had quite a lively discussion about methods, and I don't think there was consensus about exactly where we are with them.  What we all agree on is that there is going to be an almost infinite amount of data from which an equally infinite number of random false positives could be generated on side effects of drugs and devices.

     So the methods are going to require considerable work.  You have got to have the work force, you have got to have people who can do it.  I think the ultimate validation that it is a real issue due to a drug or device is more than just validating that the signal can be repeated, because you can have an association which is very repeatable, but not actually a toxicity of the drug or device.  That requires other types of investigation that need to be launched, the kinds of things that companies now do on their own, which they should, when they think they see a signal, to understand the biology better.

     Again, I think the stent example was a good one yesterday.  You have got a bunch of vascular biologists trying to understand what happens with the endothelium, is there an animal model that comes anywhere close to the human.  You have got quality people getting involved, because how much of it is just due to poor implantation of stents by operators.  You have got more clinical epidemiology to do in addition.  So you need a potential action arm of the network in addition to just the signal generation.

     MR. PLATT:  I suppose an additional thing to be alert to is that while economies of scale and the ability to use evolving information technology can transform the signal detection piece, I think that the confirmation part of those signals is going to be much like what we have been living with for the last ten years, an evolving but pretty mature field of pharmacoepidemiology or vaccine epidemiology or even device epidemiology.

     Rob is right, there are limits to how far you can go in understanding causality.  One of the very big limitations at the moment is partly data.  I think if we build the right infrastructure for signal detection, we will be a giant step ahead on having the data available for the confirmatory work.  But it will always be expensive.  That is, the confirmation will always be -- it will usually be millions of dollars per signal to be confirmed, whereas it is likely not to be anything like that for each signal you search for.  That is because you are going to need a substantial number of well trained individuals devoting a substantial amount of time to doing fairly detailed protocol development and implementation, and most of that is going to be unique to the specific signal and drug in the population, and a fair amount of record review to understand what is going on, and maybe interviewing the clinicians, and maybe even talking to the individuals themselves.  Every one of those confirmatory studies that I have seen has been a fairly expensive proposition.

     Gerald was appropriately pointing out that those won't be instantaneous.  Now they take years, and with appropriate development and resources they might take a year or maybe eight months, but they won't be two or three months. 

     MS. TRONTELL:  I agree, Rich.  I wanted to ask, if I could press you a little bit, might there be some aspects of these analyses that we won't have to do de novo?  It gets back to some of the methodologic issues.  We may have reasonable confidence in our ability to detect MIs using administrative data.  We don't need to reinvent that.  That could be something that could be done in a confirmatory fashion relatively quickly.  Additional confirmation if you want to go to medical records could or could not be done, depending on your confidence.

     But might we ultimately develop methodologic libraries that we could reuse, particularly as these data systems develop?  It would be really nice to take advantage of what others have done, rather than to reinvent it.

     MR. PLATT:  Sure.  I think that much of the benefit that will accrue will be on the signal detection side.  We will be much better at generating a signal that really does mean excessive MIs.

     But this issue that Rob is pointing out of knowing whether it is really causal is going to be harder to do.  I'm not saying that we can't do better.  There are new methods that are -- surprisingly, at least to me, new and better methods are showing up, but I think for every signal that you are seriously worried about, there is going to be a substantial amount of thinking and time to develop the appropriate epidemiologic study to try and sort out the causality piece.

     MR. SHUREN:  You have mentioned methods.  There has been a lot of talk about methods and the great need for -- there are things in the works and there are new things being developed, but there are still great needs for methods development and for testing.

     What do you all maybe see as the current obstacles for getting there?  Is it an issue of money and bringing people together?  Is it just that it is going to take time?  What do you think?  What are the things that the federal government can do to help?

     MR. CALIFF:  What is the funding agency for people to spend time developing methods in this area?  AHRQ is trying valiantly with a tiny budget, and CERTS is probably the biggest group doing it, but it is a very small amount of money.  So even though you might argue the goal of academics may be pure, in general academics don't work for free.

     I don't think FDA has a big budget to fund methodologic research.  So I would say there is a big impediment now.  There is not a large effort going into funding methodologic development.

     MS. TRONTELL:  Rob, I think you looked with great optimism to the CTSAs.  Is there any way this could be construed as part of that funding stream?  Admittedly AHRQ is trying to do it in a number of areas, but how might the CTSAs be served?

     MR. CALIFF:  At some risk of being accused of trying to divert CTSA money, every one of the funded universities has a bioinformatics and a biostatistics corps, and a training program in clinical research.  So that is definitely an angle.

     Barbara Alving, who is currently the interim head of NCRR, which is funding it, is on the FDA Science Board and used to work with the FDA.  I think she has a sub-acute if not an acute understanding of the need.

     MR. SHUREN:  I'll be sending her a dinner invitation next week.

     MR. CALIFF:  I wouldn't bet that this is a top priority for the deans of medical schools, but they tend to go where the funding is.

     MR. SHUREN:  Do you think there are any issues in terms of -- as we continue to find that there are always pockets of activity, that some groups are working on a particular project, and it turns out there is another group that is working on something very similar.  As we reach out, we hear in the case of registries that there is more than one entity that is trying to set up very much the same registry.  When we raise it, they go, we didn't know these are the folks who are working on it.

     Are we dealing with that same issue in methods development?  A lot of smart people working in a lot of silos?

     MS. CRONIN:  This is not an area of expertise, but I do recall roughly four years ago, there was a pretty organized effort between FDA biostatisticians and industry folks in coming up with some good methods on signal detection.  I think that is a model that perhaps could be built on.

     MR. SHUREN:  And it is something that is still going on now.

     MR. BRAUN:  In my opinion, one of the best ways to develop methods is to be working on a real problem.  That is when we get the most motivated people who really have the skills, rather than when it is an academic exercise, although those can be fruitful.

     I think the advances are in the people who have the experience with the data sets, where the big advances have come because there was a problem that needed -- you needed to get groups together because the sample was too small to study in any one data set.  They had to get together and to have common data definitions and so forth, and collaborate that way.

     So I can advocate for some kind of pilot or test case, with a real problem that we are concerned about that people can rally around, rather than an abstract exercise.  We have plenty of real problems to work on. 

     MR. SHUREN:  You are all -- the folks who have been invited here are all experts who have access to data.  You all have been living in that world and using it.  What are the challenges you face in your efforts to use that information in your systems for safety work?  Beyond funding.  We all know funding.

     MR. CALIFF:  It is hard to get away from the funding, because the fundamental problem is, the infrastructure isn't in place to do what needs to be done.  That is true at almost every level.  Everyone at this table I think in one way or another is struggling to put infrastructure in place as quickly as they possibly can.  But if you want to create boredom in scientific circles, talk about infrastructure for informatics.  If you want to create fright in health systems, talk about spending money on this.  It comes directly off the bottom line of health systems when they do it.

     The other part of infrastructure is people.  I think the FDA -- almost your entire work force is here today in this arena.  It is a pretty slim number of people.  Then you go out into the academic centers; how many medical schools have people that are even thinking about these problems.  It would probably surprise a lot of people in the public to know that medical schools in general are not particularly concerned about this as a primary issue.

     So to me, so many of the problems at this point are logistic and infrastructure.  A lot of the rest of us would move on pretty quickly if we can put the building blocks in place that were talked about today.

     MR. HILL:  We have a variety of problems we would share with others.  I think the first one we have already addressed, and that is knowing what the events are so that we can appropriately construct the queries.  So there may be some design issues there for our particular data set, and it may be different among others.

     The other one, independent of the financial resources, there is a human resources side of that, do we have and can we hire the right kind of people.  A few of us have talked about some options for that.  We might want to look at ways we might mate, for lack of a better word, or join the resources that are in graduate programs, wanting to learn these skills and techniques with the data sets that are there that need some help, with the proper mentoring, of course, both at the university and at the resource level.  So I think that is how we can dip into some of those resource issues.

     Finally, we have a number of priorities that drive why we are doing what we are doing; in our case we are supporting the needs of the medical groups who are contributing their data.  We have to serve them first.  Help us to convince them that this is of great value to them, alongside the list of ten or 15 other things they are asking us to do for them with their data.

     So there will be, for lack of a better term, a business case for our stakeholders who own that data.  That is why they are doing it, and they don't want us to be terribly distracted, although it will help their patients.  But there are about another 15 or 20 things that will also help their patients that we are working on.

     MS. PAXTON:  In the medical device safety area, we face some additional issues.  For example, being able to identify the device that is implanted in an individual.

     There are a couple of areas where we could really benefit and could use the FDA's assistance.  First of all is standardization of bar codes.  That would certainly help as we progress and pilot different methods for capturing that information.

     In addition, a central repository that has a description of those devices that we could link to catalog or manufacturer numbers would be very beneficial for existing registries as well as potential ones. 

     MR. SHUREN:  You mentioned standardization of bar codes.  Elaborate.

     MS. PAXTON:  The bar codes are currently in numeric format, so it makes it difficult in terms of extracting information from those bar codes.  If we had the key for each manufacturer in terms of what the bar code number contains, that would help us tremendously as well, being able to link those numbers to a description of that particular implant. 
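
     To make the "key" idea concrete, here is a purely hypothetical sketch in Python of decoding a numeric implant bar code with a per-manufacturer layout and linking it to a central catalog repository.  The prefixes, layouts, catalog numbers and descriptions are invented for illustration and do not reflect any real manufacturer's format.

# Hypothetical "manufacturer key" lookup: per-manufacturer layouts describe
# which digits of the numeric bar code hold the catalog and lot numbers, and a
# central repository maps (manufacturer, catalog) to a device description.
MANUFACTURER_KEYS = {
    "041": {"catalog": slice(3, 9), "lot": slice(9, 14)},
    "087": {"catalog": slice(3, 10), "lot": slice(10, 14)},
}

CATALOG_REPOSITORY = {
    ("041", "558201"): "Total knee prosthesis, cemented, size 4",
    ("087", "7734002"): "Implantable cardioverter defibrillator, dual chamber",
}

def describe_device(barcode_number: str) -> str:
    prefix = barcode_number[:3]
    key = MANUFACTURER_KEYS.get(prefix)
    if key is None:
        return "Unknown manufacturer prefix"
    catalog = barcode_number[key["catalog"]]
    lot = barcode_number[key["lot"]]
    description = CATALOG_REPOSITORY.get((prefix, catalog), "Not in repository")
    return f"{description} (catalog {catalog}, lot {lot})"

print(describe_device("04155820112345"))
print(describe_device("08777340020098"))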

     MR. GROSS:  That is a theme you have heard for the past couple of days, the need for a unique device identifier.  FDA I would say is taking the lead in that area.  We have had some workshops.  We put out Federal Register notices for comments, and we have gotten some very useful comments as to what would constitute a unique device identifier, what are the attributes of the device we are interested in -- certainly the manufacturer and the model, and all the way down, for appropriate devices, to the lot number, serial number and so on and so forth.

     So the first step is to get that group of attributes that we think generally speaking would capture most of the devices.  The other step is the readers of that information.  If it is presented in bar code fashion or in some other fashion, what is the appropriate technology that would read that information, put it in a database that could translate that into something that you would understand.

     All I can tell you is that that is a very high priority for the center and I think for the agency, because the need is so great.  We have heard that repeatedly. 

     MR. HILL:  At the risk of sounding like a horse trader, there was one other thing I forgot to mention.  We are all either resource constrained or financially constrained, but we have things we can trade.  Our medical groups, although they have very rich data, there are gaps in their data.  There are gaps on the claims side -- not what they file, but what they get back -- and there are some gaps on the pharmacy claims side as well.  They order it, and some of it gets back into the EMR and whatnot.

     Might there be some way that we share our data with CMS and the FDA in return for getting some of that data that you hold -- not report it to us, but report it back to the physicians where it belongs, so that they can better treat their patients, and then that gets back into our data warehouse through their system.  We can identify the groups, we can identify the physicians, and we ought to be able to somehow match up the rest of that data.  So that would have a tremendous impact for us, and it would certainly be a motivator, and it would help us to deliver better care as well as to provide you with the event reporting which you might need.

     MR. PLATT:  None of what I will say is new.  I subscribe to the notion that if there were more resources you could have a robust system for drugs in a very short period of time.

     I think the non-financial things that will make a big difference sooner or later are increased clarity about the public health use of private health data, and the ability to access medical record information without going through the usual IRB kinds of clearances.  Those would be very important.

     I think if CMS were to start moving to require the unique device identifiers to be submitted with claims when they become available, that will have a huge effect on the ability to do good device safety epidemiology.

     MR. SHUREN:  Let me follow up on something you had raised, Jeff, which was the non-financial piece -- maybe some of it tangible, maybe intangible -- about what the government could offer in return for what others provide, whether it be data or expertise.

     One was, government, you have access to data and that might be useful.  Are there other things that we could provide? 

     MR. HILL:  Those are two pretty good ones.  We already took money off the table, right?  I can't think of anything.  Certainly helping the public to understand the purpose of this, to alleviate some of their fears about what is known as repurposing of data, which is a misused term in many cases.  I think some of us go to great lengths to make sure that data is protected, but there is always an example or two in the news where something happens.

     I think in terms of what Rob said about the general concern about the FDA machine and the bad press you have had of late, to help clean that up and put this in the perspective of, this is one of the ways we are improving the way we work with the private sector and the public.  We talked a little bit about that at lunch as well.  So I think that is another intangible, a big ticket item.  So, a public relations campaign.

     Wouldn't it be wonderful for our pharmaceutical friends in the audience if we could find a way for them to team up with FDA to help improve the whole adverse event reporting system, where maybe we would get some tangible and intangible assets on the table that way.  That would make a wonderful story for the country as well.

     So since you have prodded me, those are some things I might not have said otherwise. 

     MR. SHUREN:  We have talked a lot about the data.  There is also that aspect in terms of analysis.  What do you view in some future network in terms of who in fact should be looking at the data and deciding, if there is a signal, is it real, does it merit further evaluation for confirmation?  What kind of a model might you all have in mind? 

     MR. PLATT:  It is going to need a couple of kinds of expertise.  It seems to me the default model will have some professional epidemiologists, some of whom live in FDA and need to be part of the system.  Others have to be epidemiologists who are attached to the sources of the data themselves.  Most of the kinds of data we are talking about have interesting anomalies that take a long time to discover if you are not familiar with the data.

     So it is going to have to be a marriage of people who are familiar with the data and using the particular data sources for these purposes, together with epidemiologists whose major job is to think about drug, biologic and device safety.

     I'm not optimistic about the notion that the data sources will be there for any interested user to evaluate, though.  I think it is unlikely that many of the private sources that will be so important are going to be willing to have their data be a public use data set.  They would want them to be used under certain kinds of conditions that would probably limit the individuals who would have access, at least to the full data set, which I think would be important as a starting place.  It may be possible, once there is a real question, to make public use data sets that speak to specific kinds of questions, data sets that anyone can look at.  I think that is not likely to be a general model for all of the signal detection and hypothesis confirmation that we have been envisioning.

     MR. BRAUN:  If there were a mechanism to be able to share the data with the FDA or other interested government agencies, but that would allow the data not to become public as a result of that, either through Freedom of Information Act or some other way, I think that might allow the FDA for example to be able to have access.

     MR. PLATT:  Sure.  This issue of access, it becomes a loaded word.  The model that will likely be more successful is when FDA proposes collaboration rather than a pure access model. 

     MR. BRAUN:  Actually, that is a good point.  I think that would be implicit.  As you pointed out, because of the quirks of the data and anomalies, someone would think they could just use it the way you can rent a car, get the keys and get in and drive off.  If you did that with these data, you would drive off a cliff.  So I think we recognize that.  Thanks for pointing that out.

     MR. PLATT:  And worse, it is a tank that you have the keys to.  It is not just that you can drive off a cliff, but you can knock down a whole city with it. 

     MR. CALIFF:  It would be interesting to hear at this point from somebody that represents a company that makes and sells medical products, because what you are describing is a process by which someone other than those companies has the data. 

     There is some process by which you decide whether a signal related to a product, a potential signal, is actually a signal.  That is not the way it happens right now, for the most part. 

     MR. SHUREN:  With that, that might be a good opportunity to ask folks in the audience to weigh in. 

     MR. KRALL:  I can't resist the invitation.  I am probably going to respond to several things at the same time.  It is unusual for me to sit in a meeting like this and hear so much discussion about the need for funding and not hear pleas to the industry to be the source of that funding.  But it does strike me that it is kind of obvious that the industry does have a role to play in helping at least to fund an initiative like this.

     You heard from a number of members of companies that we have as much interest as anybody in the safety of our medicines and in the effectiveness of our medicines.  Some of us are making investments to get where you have been talking about.  Most of us see that the real investment is quite large.  It doesn't necessarily have much credibility if it is undertaken within our companies.

     There is an opportunity sitting there for something that would be more credible, that would be more cost effective, that would have transparency of information, that could be built.

     So what I would put out on the table is to think about the fact that there is money as one of the resources that would be required, probably accessible from the industries that make these products, and then the question is, what is the benefit that comes to the makers of products that would make it so that they would be prepared to actually contribute those funds.

     I think that the benefits come from creating symmetry of information.  That is, we know what you know, and there is no position where we find ourselves therefore at a disadvantage.  There is also the opportunity to contribute what we know to the interpretation of the data, since we almost always know more than this particular system would know.

     I'll give you an example.  Say we create a pharmacovigilance system that is able to do signal detection around the designated medical events.  Let's just suppose we are able to do that, and I think there is quite a bit of evidence that we could.  We would still bring to the table lots of knowledge, as Rob said earlier, about the biology, about the experience with that medicine in environments other than the United States, about all kinds of things, including the manufacturing, that comes into the interpretation of those signals.

     There is expertise within our companies of the kind that we are talking about, whether it is epidemiology or information technology or statistical methodology.  All of those could be contributing.

     One of the things that I would put on the table is thinking about a mechanism that puts industry, academia, regulators, holders of the data in some kind of organization where all of us get the benefit of what we need. 

     MR. SHUREN:  Since Adrian Thomas from J&J raised funding yesterday, I know one of the sensitivities is, for a company that data may have regulatory implications.  So one question is, the closer that a sponsor may get to the process, the more concerns that are sometimes raised about the integrity of that process.

     On the point about some expectation of industry, what about from the standpoint of, would industry -- and I know you can't speak for the entire industry -- be willing to pay funds, and when there are questions regarding the product, to help, whether it be with the manufacturing or providing other information that they may have available, but then to step back from some of the real analysis and decision making that goes on?

     MR. KRALL:  Well, as you said, I can't speak for the industry, I can speak for myself with some reservation.  I don't think we want to step back from analysis and signal detection in general.  What we do want to do is to help to develop accepted methods for that, that we can be comfortable are going to be as robust as possible, that we can be comfortable we can participate in the validation of those signals, and ultimately when a signal has been generated, participate as we do today with the regulator in interpreting those, deciding what the appropriate public health action is, what further investigation should be undertaken.

     I think where we get uncomfortable, where I get uncomfortable, is when I am aware that there is some ability to learn something about one of our medicines or vaccines that we don't have access to, because I feel we have a responsibility to be able to have access to those.

     I have painted this picture elsewhere, but one possibility here is that we all start building these systems, because we have an obligation to -- Glaxo SmithKline, Johnson & Johnson, all the companies and the regulators and the owners of health care databases.  That seems crazy to me.

     MR. WILKOFF:  Just a couple of thoughts.  One was that the problem is not necessarily the signal detection and analysis afterwards.  With some trepidation I say this.  The FDA for instance did have all the reports of the safety-alerted devices, the defibrillators that came out; they were in the MAUDE database.  It was well known, the signal was there.  We did not have the ability to work with that.  The companies had it.

     The question is, when do you initiate the analysis.  If we are unable to determine the signal -- I say this with some trepidation -- maybe that system is not the system.  Maybe that money needs to be redeployed in terms of the analysis that we are working on here.

     I was just going to echo what was just said.  It seems to me that there is plenty of money and effort that is put into the analysis of the system once the signal is identified by the company, the industry or whatever else like that.  They try to understand with extraordinary detail what is going on.  When you reveal that to everybody is another issue, that is a communication issue, but the fact is, there is a lot of money put into that other part. 

     My question is, where does the MAUDE database fit into this in terms of devices and in terms of searchability, getting data out of that.  I'm not trying to make anybody uncomfortable, but it seems like an obvious question.

     MR. GROSS:  I'm not uncomfortable with that issue.  We recognize the problems with the MAUDE database.  It is undergoing significant review, as is the AERS database.  So there are issues with the way we have done passive surveillance and the systems that capture that information.  Those need to be corrected.  They cost some money, but they have to be corrected in light of what we are trying to do here as well, because they are complementary efforts.

     So we are very well aware of some of the system issues, and we are trying to address them.

     MR. SHUREN:  And we will be passing out a tin cup later in the meeting.

     MR. IBARA:  Mike Ibara, safety and risk management at Pfizer.  A couple of comments pertinent to our discussion here.  I would like to situate them in terms of the data collection or signal generation portion of looking at safety and the hypothesis testing and analysis portion of it.

     Certainly in the pharmaceutical industry we are very much aware of the daily issues.  We look at the data on a daily basis, and we understand the limitations of the data and the quality.  Focusing on the front end for a minute, many of us have come to the understanding that we are a highly regulated industry, so when you look at the pharmaceutical industry and the claims data task that we do and what we need to do, there really isn't that much room for creativity.  The steps that we follow in our processes are very much similar across the companies.

     One of the areas that we have been discussing informally among some companies is the idea of collaborating in a consortium to situate that data collection portion in an independent organization.  We certainly don't need to own the data in the sense of being the only people to see this information, and we don't now.  But we realize that the health care system has gotten to a point of complexity where we are all trying to reinvent the wheel in some sense.  And certainly that is true on the front end of collecting the data.

     When we talk about the back end, and you mentioned the idea of having companies contribute funds and backing away from the analyses, I think the situation now is that there are very many independent analyses of drug safety signals.  Pharmaceutical companies are certainly not the only people that attempt to do this.  But I think we are involved now, and we certainly would want to be involved in this, since the outcome of determining that signal is such an important outcome for the public and certainly for the companies.

     I very much like the comments that were made earlier about the idea of a collaboration.  In terms of evaluating signals, I think that we would certainly want to be part of a collaboration.  There is no need for us to be the single source of the understanding of whether this is a signal or not, but because of the expertise that lies within our companies, and because of the impact that it could have on us, I think we would not need to take the lead role necessarily, but we certainly want to be at the table as part of a collaboration in understanding whether this is a signal or not.

     MS. STAFFA:  I am Judy Staffa.  I am an epidemiologist at the Center for Drugs at FDA.  I guess I would just like to echo Miles' request.  If we are going to try to test out these systems by doing these proof of concepts, that we do that in a prospective manner.  I think we have seen enough retrospective looks at data, going back and replicating Vioxxes.  I think what we need to do at this point is to move forward and really see if we are going to do this, how do we do this in real time with real examples and real signals, and see how these systems perform. 

     Knowing how many signals they generate at any one point in time I think is a really important thing for FDA to understand as well as for these systems to understand, to be able to resource them adequately if we are able to move ahead.

     So I really think the proof of concept is a great idea.  I would just like to make a plea that it be prospective and in real time rather than retrospective looks at the data. 

     MR. MORRIS:  A couple of things.  I don't think you should exclude industry -- they being the sponsors, the pharmaceutical companies who have a vested interest in understanding their products.  And I would not exclude the positive benefits or the positive outcomes -- not just the negative adverse events or safety signals, but the opportunity to look at that population based data, to look at relative benefits in populations, where maybe that wasn't necessarily studied originally.

     I would also be careful about applying the same methodologies that are used in house at the FDA today in a randomized controlled prospective study for a phase III or IIIB NDA and a submission, to what goes on in population based, messy, dirty, incomplete data.  It is a different beast.

     I think where FDA can step up is looking at the methods, and saying -- here is what we as industry, third parties and the FDA can do with population data, here is what is accepted, here are things that are a bit out on the fringe.  It does not stand up to, and it can't take, the same rules, the same approach, even the same tools which are used when you are running the phase III data that comes in in PDF and electronic submissions.

     MS. WEST:  In 2002 I was very fortunate to teach a class in pharmacoepidemiology, where Anne Trontell came from the FDA at that point in time and first introduced me to designated medical events.  It was the first time that I knew there was a compilation that FDA had.

     After she lectured on that, UNC is a CERT, and I was involved in a project where we were trying to use a database to identify adverse drug events.  So I was trying to pick low-hanging fruit.  So I went to the designated medical events to see which ones might actually have a code list that I could use.

     I e-mailed a variety of different pharmacoepidemiologists and was able to come up with a code list from Brian Strom; I think we had sudden death and we may have had renal disease.  But at that time, and this is about 2003, 2004, as far as I knew, there wasn't a code list based on ICD-9 codes that could be used for claims.

     I don't know if that exists today.  I had talked to Marc Overhage about it.  Rich, I don't know if you have a code list for all of the designated medical events that could be used in claims, but I certainly think that that could be a tool box issue that we can work on.

     It is not only developing that compilation of codes, it is validating them, so that each of us would be using those same codes as we move forward with this process.  We talked a lot about designated medical events, but if we don't have definitions for them, what good are they.
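
     A minimal sketch, in Python, of what a shared, validated compilation of codes could enable -- screening claims for designated medical events.  The event names, ICD-9 codes and claims records shown here are placeholders only; agreeing on and validating the real code lists is exactly the work being described above.

# Placeholder code lists and claims; a real compilation would be jointly
# developed, validated, and shared so every site screens with the same codes.
DESIGNATED_MEDICAL_EVENTS = {
    "acute liver failure": {"570", "572.2"},
    "acute renal failure": {"584.5", "584.9"},
    "agranulocytosis": {"288.0"},
}

claims = [
    {"patient_id": "p01", "drug": "drug_x", "dx_codes": {"584.9", "401.9"}},
    {"patient_id": "p02", "drug": "drug_x", "dx_codes": {"250.00"}},
    {"patient_id": "p03", "drug": "drug_y", "dx_codes": {"570"}},
]

def screen_claims(claims, code_lists):
    """Return (drug, event, patient_id) tuples where a claim's diagnosis
    codes overlap a designated medical event's code list."""
    hits = []
    for claim in claims:
        for event, codes in code_lists.items():
            if claim["dx_codes"] & codes:
                hits.append((claim["drug"], event, claim["patient_id"]))
    return hits

for drug, event, patient in screen_claims(claims, DESIGNATED_MEDICAL_EVENTS):
    print(f"{drug}: possible {event} in patient {patient}")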

     The second thing that I wanted to mention -- and Fran, I am really glad that you are here, because one of my students is particularly interested in a proof of concept study, looking at liver toxicity in the VA.  One of the things that we will need to do is develop a definition for liver toxicity.  Then what we were hoping to do -- and I think this is really in line with what we are talking about here -- is coming up with a drug that came off the market for liver toxicity and seeing if we can go back and find that in the VA database.

     Now, one of the main points that I am trying to make here is not only that it is a proof of concept, but this is a graduate student project.  Maybe we are getting a little in over our heads, but this is a way of perhaps funding some of this research.  It can be done cheaply, just by paying for graduate students to do this sort of thing.

     The third point that I am making goes along with the device standards, the bar codes.  NDC codes are horrible for doing claims based work.  One of the problems that we face at UNC is, I can't afford First Databank.  I think it costs like $200,000 a year.  So is there a way of making that sort of database available so that we can be doing claims based database work, Medicaid, whatever, and that would keep the costs down as well.

     Thanks.

     MR. SHUREN:  I think my sense is that things are winding down, so with that in mind, let me turn to all the panelists and ask if any folks have any closing remarks or comments. 

     MR. HILL:  I saved one.

     MR. SHUREN:  You saved one, just in case.

     MR. HILL:  We talked a lot in the last couple of days about the precedents and parallel universes and the like.  My main reason for existing at NSAT within the AMGA is to help our members improve the quality of the care they deliver.  So we have a lot of interactions with payors, who also are interested in the quality of care we deliver, and sometimes the administration and the reporting and the performance regarding those things.  In fact, many of them either have or are proposing a pay for performance program, which I mentioned yesterday.  We prefer to think of it as pay for quality rather than pay for process.

     I am wondering, because we have ourselves talked about, wouldn't it be better for us to be working together with the payors to assess quality and analyze quality and report quality, rather than for each of us to do it individually, like pharma might be doing.  We don't believe what you are showing on a drug, so we have to do it ourselves, and vice versa.  We have said the same things.

     I think we are a little bit further along on collaborating, and might that be a precedent.  We are also interested in the safety of our patients, which relates not only to the care but to the medical products that we use.

     So I am seeing a lot of parallels there.  I am going to go back and think a bit about even some better comparisons that might help to fuel the collaborative nature of that.  But I see no reason for looking at safety collaboration to be any different than quality collaboration.  It all gets down to reporting and accountability.  So it is really the same thing.

     MR. SHUREN:  With that, let me thank everyone for participating.  You have given us all a lot to think about.  We are going to take what we heard today, we are going to take the comments we received to our dockets, and we will start piecing together those next steps, starting to lay out a road map of where we need to go with the sentinel network.

     Again, the docket closes on April 5, so please, if you have any comments to provide to us, please do so and submit them.  Materials from this meeting's presentations and the list will be up on the website by tomorrow, and we will get the transcript out for this meeting in the next few weeks.

     Again, thank you all for coming.

     (Whereupon, the meeting was adjourned at 3:05 p.m.)