
FOOD AND DRUG ADMINISTRATION

SENTINEL NETWORK PUBLIC MEETING

March 7, 2007

University System of Maryland
Shady Grove Center
8630 Gudelsky Drive
Rockville, Maryland

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway
Fairfax, Virginia  22030
(703)352-0091


Table of Contents

Welcome and Opening Remarks - Jeffrey Shuren, Andrew von Eschenbach

The Vision of the Sentinel Network - Janet Woodcock

Meeting Format - Jeffrey Shuren

Presentation by Jeffrey Hill

Presentation by Ken Mandl

Presentation by Marc Overhage

Presentation by Rich Platt

Presentation by Hugh Tilson

Presentation by Christopher Chute

Presentation by Barb Rudolph

Presentation by Michael Caldwell

Presentation by Liz Paxton

Presentation by Fred Resnic

Presentation by Alan Menius

Presentation by Alexander Ruggieri

Presentation by Dwight Reynolds

Presentation by Stephen Goldman

Presentation by Michael Lieberman

Presentation by Adrian Thomas

Presentation by John Rothman

Presentation by Jonathan Seltzer, Ben Bluml

Presentation by Donald Hackett

Presentation by Hugo Stephenson

Presentation by Marta Villarraga, Duane Steffey

Presentation by Ed Helton

Presentation by Mike Ibara

Presentation by Frederick Rickles

Presentation by Jon Morris

Presentation by Eric Sacks

Presentation by Alex Frost

Presentation by Harry Fisher

Presentation by Denise Love

P R O C E E D I N G S

Agenda Item:  Welcome and Opening Remarks

DR. SHUREN:  I’m Jeff Shuren.  I’m the assistant commissioner for policy at the FDA.  I would like to welcome all of you to the Sentinel Network public meeting to promote medical product safety.

I am delighted that so many people could attend, particularly in light of the weather.  I think folks know that even a threat of snow in Washington -- whether it affects you or a family member who doesn’t even live here -- can shut down anything.  In fact, that is happening.  There are schools closing all over the place.  They were closing yesterday, even though there was no precipitation.  So thank you for coming.
On behalf of all of us in the federal government, we want to thank you for participating and we look forward to a fairly dynamic meeting.

I would like to begin today’s meeting by introducing the Commissioner of Food and Drugs, Dr. Andrew von Eschenbach, and the deputy commissioner and chief medical officer of FDA, Dr. Janet Woodcock.  Dr. von Eschenbach is going to be giving opening remarks, followed by Dr. Woodcock, who will describe the vision of the Sentinel Network.

After those presentations, I will review some housekeeping issues -- most importantly, where you can get lunch.  Then I will introduce the federal government panelists and the invited speakers, and we will get started.

Again, welcome.

DR. VON ESCHENBACH:  Thank you, Jeff.  Good morning, ladies and gentlemen.

I want to begin by, first of all, echoing Jeff’s remarks.  The very fact that I see so many of you here, committed and really dedicated to the effort and the purpose that we are here today to discuss -- namely, to create an opportunity to bring us together in a network that will enable us to serve patients and the American public in ways that, perhaps, 5, 10, 15 years ago, we couldn’t even imagine -- is a tremendous testimony to you.  It’s a testimony to your dedication and your commitment.  For that, we are extremely grateful.

It is also gratifying to me because I have a very strong belief in how critically important your effort and your participation is to what I believe is perhaps the most profound transformation to ever occur in the history of medicine.  We, as we begin this 21st century, have an opportunity to radically change not just our health-care system, but our whole concepts of health care and of health itself.  It is because of the tremendous progress that has been made in science and technology that we are able to now begin to discover the very fundamental mechanisms that are responsible for disease, and responsible for health.  That is leading to the creation of interventions, solutions that will enable us to do things that previously seemed impossible.
But one of the most important parts of that discovery and that development is, in fact, the delivery -- obviously important, because until all of that progress actually touches someone’s life, it really has not found its meaning or its purpose.

But even more importantly, as we look at the implications of what is occurring in science and technology, as we look at the implications of the fact that we are moving from a macroscopic and microscopic perspective to a molecular perspective, what we are beginning to envision and see is that the delivery end of the continuum is not just an opportunity to provide and deliver these solutions, but in the very delivery of those solutions there is the development of new data, new information.  If we have the ability to extract that data, acquire it, assemble it, analyze it, we in fact have an opportunity to rapidly and radically transform the evolution of knowledge -- knowledge of what is happening with regard to disease and health in the human condition, not in the laboratory.

One of the most important, critical factors associated with what is happening when we apply these solutions to patients whose health depends upon them is, are we providing to them solutions that are, in fact, as effective as they were intended to be, and are they as safe as we believed them to be when we developed them and discovered them, in a very confined laboratory or clinical-trial setting?

Ten or 15 years ago, that was a question that perhaps we could not answer, because we didn’t have the tools; we didn’t have the technologies.  Today those tools and those technologies are available to us.  They still need to be further developed.  But we clearly can see enough progress in that regard, and enough progress that has been made in other industries in which the ability to acquire and assemble data has now become routine -- whether it is what is happening with your credit cards and your bank accounts or your cell phones -- now we have the opportunity to do that in health care.  We have the opportunity to bring us together to create a network and a community so that we can begin to acquire the data to allow us to have the information that we can convert into knowledge.

What better way to begin than to do it in the context of understanding safety so that we can protect and enhance the lives of those we are serving?  That is your work today, to begin to come together as a community to discuss the Sentinel Network, a virtual, electronic, integrated medical-products safety system.  It can be one of the most profound contributions to the new era in health care in this nation, and ultimately in the world.

I am very grateful for our partners, partners who have assembled from other parts of the federal government: the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, the Centers for Medicare and Medicaid Services, the Department of Defense, the Office of the National Coordinator for Health Information Technology, the Veterans Administration, and the National Institutes of Health.  The important message of not just thanking them, but listing them, is to demonstrate to you that we are committed.  We are committed, as federal agencies, to working collaboratively and cooperatively together to bring this new reality to the fore.

We want to collaboratively and cooperatively work with you.  We want to listen and learn from you and understand both the challenges and, more importantly, the opportunities that we together, as a community, can begin to seize and implement.

It is an extremely important challenge.  It is an opportunity that will change the lives of millions of people and save lives and enhance the safety of our medical care system, improve the quality of the products that we are providing to those patients, and, more importantly, continue to assure the good health and well-being of this nation and of the world.

You have important work before you.  It’s a privilege for me to come and welcome you and thank you for that effort.  I wish you well in the venture, on behalf of all those lives that you will be affecting.

Thank you.

Agenda Item:  Vision of the Sentinel Network

DR. WOODCOCK:  Good morning.  I’m Janet Woodcock.

I am going to talk a little bit about where we see the future going and why we have convened this meeting at this time.

The background on this is something that I think everyone in this room -- although certainly, by all means, not everyone in the nation -- holds as truths that are self-evident, so to speak.  Adverse events due to use or misuse of medical products are very common.  Of course, that part is always in the paper.  Adverse events, as you all know, may result from the inherent properties of a medical product, or from a problem with the manufacturing of the product -- we see that more commonly with some sorts of products than with others -- or, probably most commonly, from errors or perhaps just slight misuse in prescribing, selection, or use.  That last category is most common, probably, on the pharmaceutical end of things.

We know that, adding all this up, there is a tremendous toll of harm inflicted on the population, based on the use of medical products.  But it isn’t that anyone is trying to inflict this harm.  We are all trying to do the best we can, each part of the system.  But right now we think health-care practitioners and patients actually do not have all the up-to-date and accurate information about the benefits and risks of the medical products, especially at the point of care, that is needed to use them safely and effectively.  We are not using the information we have now and applying it to the proper care of patients, and we are not collecting the information as rapidly and efficiently as we all would like to.  I think there is agreement on these points.

The benefit/risk profile of medical products evolves over time.  When we first start using a product, whether it’s a device or a vaccine or a drug, we know a certain amount about it.  But over the next decade, or even the next 100 years, we continue to learn more about the products.

The premarket clinical trials that are done, of course, to get products onto the market can’t identify all the potential risks, as we know, or even the future changes in use patterns that are going to occur once that product gets out there.  Therefore, full knowledge of benefit/risk doesn’t exist at the time of product approval, and often not at the time of treatment decision making.  As I said, often even the full weight of information that we actually have is not brought to bear on those treatment decisions, because that information isn’t in the hands of the prescriber in a usable form at the time of the prescribing or use.

As I said, when this information for safe use exists, it’s not always readily available at the time of treatment decision making.  Therefore, timely and effective postmarket surveillance and risk communication -- in other words, communication of the evolving information -- as rapidly as possible are critical to reduce the knowledge gap and foster better-informed treatment decisions and actually keep people safe.

The effectiveness, however, of the federal government postmarket surveillance and risk-communication efforts, along with the industry efforts, has been constrained due to the following limitations:

  • Number one, the quality of the data that we have to deal with.
  • The quantity of data.  Often only a very small fraction of the harm that is occurring is actually reported in some manner.
  • The timeliness of acquiring this data from the health-care system and being able to analyze it.
  • Our capacity as a system -- the federal government, the industry, academia, and the health-care system -- to rapidly conduct postmarket studies when needed to gain the information, to confirm signals that have been arising.
  • The risk-communication tools that are used.  When I started at the Food and Drug Administration, the risk-communication tool was a letter to the doctor, the “Dear Doctor” letter.  We still use that, to some extent.  It is not an effective communication tool.  That is not to say that anything is being done wrong.  It is saying that we can do better than that now.
  • The resources available to all these sectors to actually conduct all this work and to improve the situation.
  • Our current efforts are frequently crisis-driven rather than prevention-driven.  We identify signals and we must go out rapidly and do studies or try to figure out what is going on.
  • Our analyses currently focus on overall populations rather than differentiating subpopulations.  In other words, we, in many cases, do not have the capacity to connect the basic science with what is happening out in the clinic, to say, is this adverse event due to differences in drug metabolism, is this adverse event due to some particular drug interaction, et cetera?  We aren’t bringing all the available science that exists right now to our analysis.
  • Most importantly, I think, there really is no coordinated systems approach to this.  We are operating off of systems that were put in place many years ago, before computers were really in wide use.  We have computerized our data collection in some cases, or our databases in some cases, but we haven’t really taken a very hard look at what the new world we are moving into could do for this situation with adverse events and with the harm that is being inflicted on the population.

From the FDA’s standpoint, and I think from the federal standpoint -- our federal partners who are here -- we see that the private sector is moving ahead and taking steps that really can facilitate surveillance activities and help us learn much more rapidly from the health-care system.  They are developing new information-technology tools, through the e-health record and so forth.  They are exploring informatics methods and analyses.  A number of the health-care systems are performing their own analyses to try to improve quality within their own organizations.  This is creating the capacity to conduct postmarket safety assessments within the health-care system in a way that has not really been done before.

Therefore, we feel we need to think and discuss how to link the private and public-sector efforts together in the future, or have a plan to address the limitations that I have gone over, through better integration of the nation’s postmarket medical product safety activities (the private activities that are growing, the public activities that are happening in many agencies, the industrial activities), to create what might be called a Sentinel Network -- in other words, a virtual, integrated, electronic nationwide medical product safety network -- so that we all are able to use the efforts of each other to the common good of protecting patients and improving our knowledge about medical products.

We would like to aim toward a network that would foster seamless and timely electronic flow of information about medical product safety from what is out there in the health-care arena, as well as from the surveillance reporting systems, through various risk-identification and risk-analysis procedures.  Some of these are being done, as I said, in health care.  Some of them are being done by the Food and Drug Administration.  Analyses are performed by the sponsors of these products, and so forth.  We need to make sure that information is all put together and is transmitted to practitioners in a timely manner at the point of care so that they can practice with the best possible information.

We are not advocating assembling some large, grand database or anything like that.  What we are talking about is to have a public-private collaboration or partnership to build on existing efforts and to connect them, not to create new databases or structures.  We would hope that the network could use national and international standards that have been adopted by AHIC, but also that have been developed through national standards organizations.

Components that we need to look at today -- and I think I need to stress that today is going to be our first and more or less preliminary exploration of this:

We need to look at data collection, who is collecting data where and how, and where is it residing.  Electronic health records are an obvious opportunity.  They capture adverse events of different kinds in the context of medical product use and in the context of clinical practice.  So EHRs or other types of integrated databases that might be used with the EHRs are sources of data that we need to talk about.  There may be other sources of data out there.  We know that there are registries, large registries, that are conducted in some academic settings and so forth, and we need to talk about those.

Risk identification and analysis:  The folks here at FDA tell me repeatedly that one of the things we need most right now, as we begin to get all these data sources, is research into how to analyze them correctly and sort the wheat from the chaff.

We need integrated research networks, and we need to find out which ones are going on where and what they are doing.

People are discovering data-mining tools.  Obviously, data mining is very advanced in some other sectors, such as defense and so forth.  The question is, what research and effort do we need to do to apply that type of data mining to this problem?

We need eventually to reach agreement on some methodologies.  Obviously, at some level, these analyses and decisions reach a level of controversy, and to have methodologic agreement will be extremely helpful for all of us, I think, to reach some kind of consensus on proper methodologies and how you can draw inferences from these types of data.

How to integrate biology into all of this is something I am very interested in, not just empirically looking at all this, but whether we can actually have ways whereby signals and hypotheses that are generated out of this system are actually translated into the biological meaning.  Is there an underlying mechanistic explanation?  Can we do research to find that out?  That sort of backwards translational research is going to be extremely important.  In fact, I think if the promise of all this comes to fruition, it will be by connecting this back to the biology, so that we actually learn knowledge, not just develop information.  We need to understand why these things are happening so that we can prevent them.

Finally, equally important -- and I don’t know how much of this sector is represented here today -- are the risk-communication aspects of this.  Our system right now for communicating to physicians, other health-care practitioners, and patients is extremely fragmented and isn’t working very well, to be very blunt about it.  We need to not only leverage the medical community’s expertise, we probably need to reach into expertise of those who are much better at communicating.  To communicate to clinicians, this new risk information will have to be integrated into the workflow of clinical practice.  There is no doubt about that.  We cannot simply bombard physicians, pharmacists, nurses with pieces of paper giving them information.  We need to integrate that information in a way that is usable to them during practice.

We know that new decision support systems are being developed out there.  That is another piece that we all need to think about in a Sentinel Network:  How do we effectively transmit this information or make it available to people?

What are the objectives of today’s meeting?

  • We should be able, hopefully, through discussion among a wide number of parties, to evaluate the current needs in postmarket medical product adverse-event data collection and risk identification and analysis -- and not only the needs; we should be able to find out, I think, from a wide variety of stakeholders what people are actually doing out there, what they would like to do, and what the gaps are between what could be done, what people would like to do, and what is actually going on right now.
  • Then identify the obstacles to, and what could facilitate or incentivize, developing the components that we need to close this gap and connect these various efforts into a Sentinel Network.
  • As part of that, we can identify opportunities for public-private collaborations, for assembling the data and for doing the research and analysis pieces -- the risk identification and analysis components -- of the Sentinel Network.

Obviously, some parts of this belong in academia.  They are very research-oriented.  Some parts are analytic and perhaps belong within the federal government and the industrial sector.  For other parts, we need to figure out who would do them.

After this meeting, the public docket will remain open until April 5.  We will review the oral and written comments that we receive on this.  Then we will attempt to develop from this a roadmap for assembling a Sentinel Network.  This might include, of course, additional meetings, formation of public-private partnerships, other steps that we might need to take.  Until we hear from you at this meeting, we are not going to know exactly where we need to go next.

Thank you very much.  I will turn it back over to Jeff.

DR. SHUREN:  Thank you, Dr. Woodcock.

Agenda Item:  Meeting Format

Now I would like to go over some housekeeping details for the meeting.

We have our federal government agencies represented here at the head table.  In a minute, I will introduce the folks who are up there.

We have also set aside seats in the front here for our invited speakers.  Each of those invited speakers is going to give a 10-minute presentation and we are going to have five minutes of Q&A with the federal government representatives.

Those individuals who have registered to speak will be giving their presentations in the afternoon.
Let me just mention that some folks are having difficulty getting here because of the weather.  Some people are flying in.  Some of the folks who are registered to speak in the afternoon -- if we move a little bit more quickly, I may ask them to present this morning rather than in the afternoon.  We will just try to be a little bit flexible with the schedule.

For those who are registered to speak, we have also provided seats for them off to the side.  When folks do come up to talk, we will try to move it along quickly.  I will ask you to stay within the timeframe, just as a courtesy to others.  Those who have registered to speak will have five minutes, with three minutes of Q&A.  I do have a person sitting up front here who will let you know when you have two minutes left to speak.  We will signal this and we will let you know when you are out of time.  No one is going to throw rotten tomatoes, but I just ask that you try to stay on schedule.

Tomorrow we are going to hold a moderated dialogue between the federal government and invited speaker panelists to discuss concrete steps that we, together -- the public and the private sectors -- should take to assemble the Sentinel Network.  We have also included times throughout the day for the rest of you to participate in that discussion.  We are going to set up a rectangular table in the middle here and have federal speakers and invited speakers sit around that.  Everyone else is going to be around.  We are going to encourage you, at selected times, to get up, give us your input.  If you have other thoughts and ideas, we want to hear about it.  This is really meant to be very, very interactive.  We do want to hear your thoughts.

Again, I remind you that the public docket is open until April 5.  If you do have any written comments, please send them to us.

We hope everyone has registered.  If you have not, please do so.  It just helps us keep a record of who has come to the meeting.

We are going to be breaking for lunch at noon.

[Administrative announcements]

Lastly, I just want to note that a transcript for the meeting and all presentations will be posted on the docket on the FDA Web site.  I ask you to just please be patient; it’s going to take a little time to get the transcript done.  But we will get it out there.

Now I would like to introduce the federal government panelists.  Not everyone is here.  Some folks are having difficulty getting in.  But let me just at least mention them, because I do want to make sure you are aware of the organizations that are involved.

From AHRQ, we have Drs. Jean Slutsky and Anne Trontell.
From CDC, Dr. Dan Budnitz.
From CMS, Dr. Jeff Kelman will hopefully join us.
From the Office of the National Coordinator for Health Information Technology, Ms. Kelly Cronin.
From the Department of Defense, Rear Admiral Tom McGinnis and Lieutenant Colonel Mike Datena.
From FDA, we have Dr. Gerald Dal Pan, Dr. Tom Gross, Dr. Miles Braun.
From the NIH, Dr. Clement McDonald.
From the VA, Drs. Fran Cunningham and Mike Valentino.

For invited speakers, again I will introduce some folks who are not here at the moment.  A number of them will be here tomorrow.

Dr. Jeffrey Hill from the American Medical Group Association and Anceta.
Dr. Ken Mandl from the Center for Biomedical Innovation and Harvard Children’s Hospital.
Dr. Marc Overhage from the Indiana Health Information Exchange and the Regenstrief Institute.
Dr. Rich Platt on behalf of the HMO Research Network, Harvard Pilgrim.
Dr. Hugh Tilson is going to come to speak on behalf of Dr. Rob Califf, who is at Duke.  Rob will be here tomorrow, but Hugh is going to give his presentation and talk a little bit about the CERTs.
Dr. Chris Chute from the Mayo Clinic.
Dr. Barb Rudolph from the Leapfrog Group.
Dr. Michael Caldwell from Marshfield Clinic.
Ms. Liz Paxton from The Permanente Federation.
Dr. Fred Resnic from the Massachusetts E-Health Collaborative.

I also want to express regrets.  Two folks were not able to make it at the last minute, but they are part of discussions here.  Dr. Garret Fitzgerald from the University of Pennsylvania and Dr. Wilson Pace on behalf of the Federation of Practice-Based Research Networks.

With that, again let me welcome you.  Let me introduce our first speaker, Dr. Jeffrey Hill.

Agenda Item:  Presentation by Jeffrey Hill

DR. HILL:  Thank you, Jeff.  Good morning, everyone.  Thanks for coming.  It’s a pleasure to be here.
This morning I would like to introduce you to the American Medical Group Association and its Anceta Collaborative Data Warehouse.  I want to give you a quick overview, which will set the stage for some further comments and questions, I hope.

Anceta is the health informatics subsidiary of the American Medical Group Association, referred to as AMGA.  It supports AMGA’s strong tradition in collaborative quality improvement.

Before I start talking about Anceta, I probably should say a few words about the American Medical Group Association itself.  The AMGA represents the leading multi-specialty medical groups around the nation -- nearly 300 medical groups, representing some 80,000 physicians in 42 states and the care of about 50 million active patients across the country.  This is done as a team sport.  These are coordinated care-delivery systems, large and small, and medical groups involving primary care as well as specialty care.  Many have both inpatient and ambulatory care.

You can see here, just by looking at the names of the current board of directors, the scope of the types of members that we have, from the large integrated health-care systems, such as the Cleveland Clinic, Kaiser, Geisinger, to some smaller ones, represented by Everett Clinic in Washington, Mount Kisco in New York.  So it’s quite a broad spectrum of members around the country, also representing academic medical groups in a number of medical centers around the country.

This is just a quick map to show you the spread.  They represent both rural and urban care, pretty much around the country -- not all parts of the country, but certainly representative.  There are a couple of groups even in Hawaii, which is not shown here.

Anceta was formed a few years ago as a result of a number of AMGA one-off studies looking at outcomes research and quality, sponsored or not, where groups would come together and share their data.  Every time they did that -- anonymous to each other -- just by looking at the data among themselves and others, their level of quality would improve.  At one point, though, the board of AMGA said, “Why are we doing these one-off studies all the time?  Why don’t we form a national data warehouse so we can slice and dice this any way we might want?”  Because research questions, as you know, and the results that they generate always generate additional questions, and you may or may not have the data to go to the next level of query, unless you have access to the full set of data.

So our objective, on a mandate from the AMGA board, was to put together a national collaborative data warehouse, with the objectives shown here:

  • Mainly to give each participating group access to its own data -- be it administrative, ancillary, or clinical data -- in a standardized, analyzable format for managing quality, cost, and utilization.  The idea is that everyone is looking at their data in the same standard format with the same kinds of tools, so they can, in the tradition of AMGA, share the knowledge around the data and share best practices in improving care to their patients.
  • The real emphasis of this, though, is to have comparative data among themselves.  This is not about connecting or collecting data within a group, but it is more about collecting, aggregating, and analyzing the data among multiple groups around the country who are looking to get comparative data and benchmarks in areas such as the ones listed here, whether it is practice management, looking at the performance of the physicians or products, be they drugs, medical devices, or biologics, and certainly to continue to look at health outcome studies, the economics of care, and to continue in the tradition of improving the quality of care.
  • There also are a number of treatment and outcome studies that can support evidence-based medicine that we hope will get back into their workflow, whether through their decision-support systems and their EMRs or not -- of course, to improve the process of delivery of care in a coordinated environment.
  • Ultimately, to have a large, rich database for health services research among them.

There are also a number of standard tools.  I say “standard.”  Everybody uses the same kinds of tools, whether it’s for clinical trials, looking at protocol modeling, or looking for cohorts of patients, whether it’s a drug or device trial, or maybe just a quality-improvement or disease-management program, and also to look at results-based performance measures and how we can use these for evidence-based guidelines to improve accountability.

Up front, we all understood that there was a need to underwrite this exercise.  So we are providing, through third parties, a new source of health-care information to all segments of the health-care and life-science industries.  We can talk a little bit more about that later.

As I said earlier, this is really enhanced by the types of relationships among these multi-specialty group members.  We are able to collect and aggregate and analyze comprehensive patient-care data because of the very nature of the medical groups that are contributing it.  They are coordinated care-delivery systems involving, in most cases, not only primary care but most specialties, including in many cases inpatient facilities, as well as outpatient clinics and ambulatory care.  There are both in-house and external lab facilities, or access to the lab data, and similarly with pharmacies.  These groups are large enough to either have inpatient pharmacies -- and many of them own their own outpatient pharmacies -- or have relationships with pharmacies, whether through their EMR or PBM relationships, to get access to the pharmacy data -- and, of course, radiology and the like.

The other thing that is very important to appreciate is that these patients tend to receive care for long periods of time, just because of the nature of that group.  In fact, many of these groups are communities themselves because of the large scope of care.  This is even though they may change their health coverage and their payers, the plans that they are involved in.

Typically, our members have been early adopters of health-care IT and new technologies.  Therefore, the depth and the scope and the longitudinal nature of the data that we have access to through our groups are quite unique.

Most importantly, in doing all of this, whether it’s on the group side or on working with the outside world, using this real-world data, AMGA and Anceta are not only viewed, but are actually acting as a trusted intermediary for our physicians and, of course, their patients.

We have already done a pilot, a couple of years ago, with three groups.  We have developed and are currently operating a production-level data warehouse through a long-term partnership with a company called Convergence CT, using their product suite called DB*FOCUS.  Quickly, it loads and transforms the data from a wide variety of sources.  It is technology-agnostic, whether the source is a practice-management system, hospital, lab, pharmacy, or EMR.

Many of our groups, particularly the ones we are starting with, have their own data repositories, from which we can extract the data in a semi-standardized form.  The system securely transports encrypted, normalized data into the central data warehouse.  Each group holds its own keys to the identities of the physicians, as well as the patients.  This goes beyond being HIPAA-compliant.  We go much further than that in protecting identities, even those of the physicians and the groups at this stage.

Each medical group has access to and control over their data that gets uploaded into the data warehouse.
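
To make the flow just described concrete, here is a minimal Python sketch of a group-side staging step, assuming hypothetical field, function, and key names -- Anceta’s actual tooling is not described in the transcript.  The keyed hash lets the warehouse link a patient’s records over time without ever holding the real identifier; only the originating group, which alone holds the key, can reverse the mapping.

```python
import hashlib
import hmac
import json

# Hypothetical sketch only: each medical group holds GROUP_KEY locally and
# never uploads it, so only that group can map pseudonyms back to real IDs.
GROUP_KEY = b"held-by-the-group-never-uploaded"

def pseudonymize(identifier: str) -> str:
    """One-way keyed hash; the central warehouse never sees the raw ID,
    but the same ID always maps to the same pseudonym, preserving linkage."""
    return hmac.new(GROUP_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def to_common_schema(native_record: dict) -> dict:
    """Map a native EMR record to the shared warehouse schema.
    Field names here are invented for illustration."""
    return {
        "patient_id": pseudonymize(native_record["mrn"]),
        "provider_id": pseudonymize(native_record["npi"]),
        "code_system": "ICD-9",
        "code": native_record["dx_code"],
        "service_date": native_record["date"],
    }

# Records staged behind the group's firewall would then travel to the
# central warehouse over an encrypted channel (e.g., HTTPS/TLS).
staged = [to_common_schema(r) for r in [
    {"mrn": "12345", "npi": "9988776655", "dx_code": "250.00", "date": "2007-02-01"},
]]
print(json.dumps(staged, indent=2))
```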

I am not going to go through this whole diagram.  I don’t have time for that.  I just wanted to point out a few things here.

If you look on the left, this essentially represents the staging server -- the DB*FOCUS node -- located behind the firewall, under the control of each group.  That is where the data is extracted, staged, and pre-transformed to a common data schema.  This pulls from all of their systems.  Then we aggregate that data into the centralized data warehouse, where we can form a variety of datasets and data marts for analysis.

On the right-hand side is a suite of activities that they each benefit from for contributing their data.  They get access to their own data, actually, directly from the staging server that is before the data warehouse, so they can see what their data looks like in the aggregate, as well as individually.  Then we provide routine reports and comparative data, one of our goals.  We have a community collaboration portal, where they can not only access the data and receive reports, but interact in a number of ways to rally around the data.

Once they see a study that comes out, they can download a data cube or a data mart in diabetes or cardiovascular or whatever and dig deeper to understand why they are where they are compared to their peers.

Then we have a host of custom analytics, which could include adverse-event identification, but, most importantly, the analysis of those events.  Detecting some adverse events may require looking for patterns, not just a single event that trips a wire you can go back and find.

Certainly, we have a number of industry customers that can benefit from this data.  They have access only to fully de-identified, aggregated data.  This includes not only the pharmaceutical industry, which is shown here, but also payers, other providers, the government, and the like.  So we have a system in place to make that data available, and this is all established in agreement with our groups.

The current status:

  • We had previously established a very comprehensive HIPAA-compliant privacy practice.  This goes beyond just protecting the identity of the patients, but also the providers and the medical groups themselves.  There is a business associate and a data-use agreement.
  • We have, actually, a common agreement.
  • Basically, we have a consensus among those groups.  It is part of our role as an intermediary to help them form a consensus.
  • We actually have nine participants currently loading three to four years of data, with regular updates.  It is currently monthly.  It could be weekly.  Some might even do daily feeds.
  • We have actually aggregated the data among five of the participants.
  • We have started, as of last week, analyzing and reporting some of the individual and aggregated data.
  • We are currently selecting the next-phase participants, probably 18 to 20 this year, with the idea that we will load from 10.

This is my last slide, except for the thank you.  To give you the characteristics of those, right now, among those nine groups, we have about 4 million patients or so, almost 4,000 physicians in 224 facilities, representing 24 hospitals.

This is already a mix of our representation across the country -- integrated health-delivery networks large and small, medical groups, and we are just about to start an academic group as well.  All have adopted electronic medical records, some for as long as three to seven years.  They are all experienced in clinical research.  We are going to be doing 10 to 12 groups a year.  So this will go up from 5 million or so this first year to another 10 million or 15 million patients per year, until we have everyone loaded and aggregated.

Thank you very much.  Here is my contact information.  I certainly look forward to some further discussions.

DR. SHUREN:  Thank you.

We are certainly very interested, in terms of data collection and analysis, as to what may be incentives for those who have access to data to actually pull that data together and perform analyses.  I was rather struck, when you were speaking, that what you were setting up was using a standard format from multiple data systems, for multiple purposes -- the data could be used for practice management, for health outcomes, for quality of care -- and then to share those best practices with members.

I wanted to ask you to comment in terms of drivers for getting engaged in this -- one, if some of the incentives for collecting data that then may be used for health-outcomes research or health-services research may be drivers for practice.  You talked about practice management.  If so, how do you work with those drivers to build this database that may be used for other purposes?

The second is, any thoughts regarding economies of scale?  The data can be used for multiple purposes.  What has been your experience from that standpoint?

DR. HILL:  On the first point, we exist, first and foremost, to serve the needs of the medical groups.  It has been a mandate from the AMGA board, which organized Anceta and directs its activities through Anceta’s board of directors, most of whom come from the AMGA.

Most of the groups can do their own analysis.  You are going to be hearing from some of them today.  It is not about what they can do themselves as much as how they compare to and learn from their peers.  It is the comparative data that is the driving force.

But they all use different systems.  You could go out and say, “Well, we are going to aggregate the data from this group and that group, and we are going to get the EMR vendors together.”  But when you have one EMR, you have one EMR.  In fact, sometimes when you have one EMR vendor, you have four different datasets within that vendor’s product.  So someone has to do the heavy lifting to combine these across the different technologies.  There is a lot of early heavy lifting on that side, where we map their data, no matter where it comes from -- whether it is a single site with a data repository, or a practice-management system and three or four other sources -- to a common data schema, with a hierarchy of coding, not distorting their native coding, but finding some common level of schema and vocabulary where we can begin to aggregate and analyze it with some sense.  Otherwise, you have apples and oranges.  We have apples and apples, but sometimes they are different kinds of apples.
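
As a rough illustration of that “different kinds of apples” problem, here is a hypothetical Python sketch of mapping native vendor codes to one shared vocabulary with a rollup hierarchy.  Every vendor name, code, and mapping below is invented; this is not Anceta’s actual schema.

```python
# Native (source, code) pairs normalized to one shared vocabulary,
# without discarding the native code.
NATIVE_TO_COMMON = {
    ("emr_vendor_a", "DM2"):    "ICD9:250.00",
    ("emr_vendor_b", "250"):    "ICD9:250.00",
    ("emr_vendor_b", "250.00"): "ICD9:250.00",
}

# A coding hierarchy lets analyses roll up to a level every group shares.
ROLLUP = {"ICD9:250.00": "ICD9:250"}  # specific diagnosis -> disease category

def normalize(source: str, native_code: str) -> dict:
    common = NATIVE_TO_COMMON.get((source, native_code))
    return {
        "native_source": source,
        "native_code": native_code,  # preserved, never distorted
        "common_code": common,
        "rollup_code": ROLLUP.get(common, common),
    }

print(normalize("emr_vendor_b", "250"))
# {'native_source': 'emr_vendor_b', 'native_code': '250',
#  'common_code': 'ICD9:250.00', 'rollup_code': 'ICD9:250'}
```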

So that’s the tough part, getting it into the form where it can -- we talk about interoperability and standard models and whatnot.  That is right on top of all this.

But their real driver is to really have data among themselves so they can see, “Why is my diabetes care not as good as I thought?  If it’s not as good as I thought, is it because our process within our organization is different than group B’s?  Or do we have a group of physicians in that organization who are just not quite following where they should be in our own processes?”

You can look at the average of a bell-shaped curve for a measure.  Maybe you are a little bit lower, but there is this tail over here of maybe 10 percent of your physicians who are outside of that.  Let’s bring those back in.

So it’s process improvement within the group and among the groups.  That is really their driver, improving quality and the process of care -- until more recently, when we are getting into pay-for-performance or performance measures, where we need to have a standard way of helping the payer community understand what quality looks like, let alone pay for.  So there is a new motive that we are all trying to figure out.  Whether it’s PQRI or HEDIS reports or whatever, it’s a daunting task, which has some significant financial ramifications, let alone clinical ramifications.  These are businesses.

The second question -- could you remind me?

DR. SHUREN:  It was that issue of economies of scale.

DR. HILL:  Oh, yes.  We have taken a lot of time with the first nine.  Halfway through the first nine, we shifted gears, realizing what works best, what does not, in terms of loading, as well as aggregating.  We have scrunched the timeframes of loading data significantly, as well as preparing the groups and knowing how to select them better.

This data warehouse is supported by some very large companies that have the big data centers and all of the tools to support the scalability and the flexibility to that.

I think most of the work is done at the individual group level, so the economies are in the process and the learning of the process, as much as that data warehouse growing.  However, if you are looking from the user side of the analysis, the larger that gets, the more comprehensive, the larger the populations are, the more you can subcategorize subsets of those populations for a number of studies.  So that economy is real.

DR. WOODCOCK:  You implied, I think, that as part of the business model, some of the support can come from having the data be mined by others.

DR. HILL:  Yes.  We make that commercially available to the outside world, third parties.

DR. WOODCOCK:  Is that a reasonable source of support for your --

DR. HILL:  It’s the only source of support.  We try to make this as bearable for our members as possible.

We do appreciate the commercial value, as well as the ethical value of having this kind of rich data.  That itself improves care.  Our groups understand that.  We have done it all up front.  They certainly are happy doing that.  It’s always an issue of risk in making our data available.  But it only happens because we are doing it through a trusted intermediary.  We have two layers of trusted intermediary.  We can shut that data flow off at any time if it is not used properly.

DR. WOODCOCK:  So you allow people to run analyses on your database?  In other words, would they request an analysis, or do you actually give them the data, the anonymized data?

DR. HILL:  It can happen both ways.  One could order a dataset or a data mart of particular interest, based on selection criteria of patient cohorts, or the analysis could be done on their behalf by an analytics team that we both are building up, both we and our partners, Convergence CT.  So it’s pretty flexible.

DR. WOODCOCK:  There may be peculiarities of your database that only your own analytics team is able to navigate successfully.

DR. HILL:  That’s correct.

DR. WOODCOCK:  Do you expect that the pay-for-performance, if this becomes more prominent, is also going to be a driver?

DR. HILL:  I think so.  First of all, understanding what the best method is for that -- we like to think of it as pay-for-results or pay-for-quality rather than process.  We have heard people like us say that before.

DR. GROSS:  In this process of developing standard formats and terminologies and the like, do you see eventually that the entities that feed the information into this system will ultimately migrate to those standard terminologies?  Is there a movement to best practices?  Maybe you could speak to what sorts of standard terminologies you use currently.

DR. HILL:  First of all, we think so and we hope so, respectively.  We have found that there is so much discussion and controversy and variation in what we should be doing, both from a technical side and from an analysis side, let alone the measures.

We are not trying to introduce anything new.  We are trying to look at what makes the most sense among our members of the standard kinds of technologies and measures and coding and whatnot, so that we can help define what works best.  We think our groups, as they join, will help to accept and even drive those standards for the rest of the industry, mainly because we can have data showing the value of doing it this way versus that.

As to the coding, it’s a little bit over my head.  I know some of the words.  It’s the standard types of things -- ICD-9 and CPT codes.  We can take HL7 feeds and the like.

As you may appreciate, many of our groups are already doing this themselves.  So what we are trying to do is find a common denominator among them that works the best.  But even among themselves, they are all a little bit different.

So we are hoping to influence the national standards by showing what works best.

DR. SHUREN:  Thank you very much.

We will have opportunities tomorrow, too, to follow up on questions.

Let me ask Dr. Mandl.

Agenda Item:  Presentation by Ken Mandl

DR. MANDL:  Good morning, everybody.

I am going to talk about informatics and take a step back and think about a framework for approaching a Sentinel Network.

The challenge is that the FDA needs to get innovative drugs and devices to the market quickly, while protecting the public from any adverse effects of those drugs and devices.  This is clearly an impossible task.  But the perfect is the enemy of the good, so we are going to try anyway.

I come at this from the approach of biosurveillance, an area I have been working in for the past seven or eight years.  I would like to just draw some analogies.

In two columns here, we have outbreak detection and drug safety surveillance.  Both of these fields were catapulted by crisis precipitants.  For outbreak detection, it was anthrax; for drug safety, I think, Vioxx.

The approach traditionally in outbreak detection had been mandatory reporting, and in drug safety, likewise, a reporting system, MedWatch -- actually, a voluntary reporting system.

Limitations:  There is bias in reporting.  There are delays in reporting.  There is incomplete reporting.  There is limited automation and there are limited communication systems.  That is outbreak detection.  For drug safety, the same.  So I think the approaches are potentially alignable.

The action arm has been shoe-leather epidemiology, telephone calls; the action arm here, “Dear Doctor” letters, other public alerts, label changes.

The evolution has been towards automated surveillance systems, such as the system I run for the state of Massachusetts.  In drug safety, I guess that is the question:  What is the next technology we are going to explore to bring this together?  Is it going to be an automated surveillance system of sorts -- biosurveillance for drugs?

The Sentinel Network, I think, requires four building blocks, broadly speaking:  data acquisition, signal detection, adjudication of those signals, and communication of those signals back to the public, the decision makers, the patients, the physicians, the pharmacists.

Let’s just take a quick look through these four building blocks.

Data acquisition:  Challenges:  Even in the phenomenal databases we will be hearing about today, much of the information is still in free text.  Another challenge:  Even something like pharmacy benefit management data, a very high-quality data source for medications, is not necessarily complete.  For example, here is some PBM data that we pulled on a real patient and a bunch of drugs he was on.  It turns out that four months ago, the patient switched to an insulin pump, which is not dispensed by the usual mechanism.  It also turns out that the patient was actually moved to a lower dose of one of these drugs, and that was not reflected, because they didn’t get the script change.

Privacy:  As much attention as we pay to privacy, it’s difficult to actually maintain the privacy of patients in large databases.  It is a fairly complex science.  We published a paper that actually showed that a bunch of people were re-identifiable from a whole bunch of articles published in the medical literature, because their home addresses were placed on low-resolution maps.  It turns out that we could actually put those people right back on their houses, as you see in the lower right-hand corner.  So even with the best of intentions, privacy is not always attended to properly.

The opportunities here include processing the free text.  There are actual language-processing tools.  In the National Center for Biocomputing, an NIH roadmap center, my close colleagues are working on processing electronic text records from Partners HealthCare to link with genotypes for biological discovery.
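
As a toy illustration of what free-text processing buys you, a lexicon scan over a clinical note might look like the following in Python.  The lexicons and the note text are invented, and real clinical language-processing tools, such as those Dr. Mandl alludes to, are far more sophisticated than this.

```python
import re

# Invented lexicons for illustration; real systems use large curated
# vocabularies and context handling (negation, dosage, temporality).
DRUG_LEXICON = {"warfarin", "metformin", "lisinopril"}
EVENT_LEXICON = {"rash", "bleeding", "syncope"}

def scan_note(note: str) -> dict:
    tokens = set(re.findall(r"[a-z]+", note.lower()))
    return {"drugs": tokens & DRUG_LEXICON, "events": tokens & EVENT_LEXICON}

note = "Pt on warfarin, presents with GI bleeding; INR elevated."
print(scan_note(note))  # {'drugs': {'warfarin'}, 'events': {'bleeding'}}
```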

Personally controlled health records are an area that I think is important.  In work I do with Rich Platt, who you will be hearing from shortly, in the CDC Center of Excellence in Public Health Informatics, we are focused on the public health benefits of acquiring data from patients through personally controlled health records -- essentially, records that the patient owns and uses to integrate their data across multiple sources of care, creating a virtual medical home for patients.

There are models for regional and national data sharing that are evolving.  At the Center for Biomedical Informatics at Harvard Medical School, we are exploring these models.  One of them we presented to ONC recently.  I presented this with Marc Overhage, who you will be hearing from shortly.  This is a model based on something called SPIN, the Shared Pathology Informatics Network.  We are actually sharing data across multiple Harvard hospitals, Indianapolis, and a couple of other sites.  I am going to take you through this, in a high-level, conceptual way, very quickly.

Red is the hospital database.  Blue is an external database that the hospital maintains, exposed to the network.  We get a whole bunch of hospitals or data sources, with data exposed to the network.  The hospitals choose what data they put into this external database.  We connect them together through a network.  We take one of these guys and make it into what is called a supernode.  This guy can actually broadcast queries across to the other sites.  What we create here is, essentially, Napster for clinical data, where we have a peer-to-peer exchange of clinical data.

What you can do then is take these sorts of data and share them.  I will show you why this is an important model in terms of control and participation.

If you get a whole bunch of networks together, they can talk to each other if these supernodes can talk to each other.  So each network can actually have regional variability, and even a different flavor, as long as the networks can interact here through these supernodes.

In the biosurveillance model, we are actually shifting our biosurveillance system to use this network model where the biosurveillance system gets data from one of these supernodes.  We demonstrated this, as I said, at ONC across three regions, Massachusetts, Indianapolis, and Mendocino County.  We get data out of one of these nodes.  This allows control points at two places, which is critical.  One is, the institution chooses what it puts into that external database, and two, there are different routes of access to that external database and you can define authority accordingly.

Here is an example.  We could do biosurveillance initially with anonymized data -- only anonymized data -- exchanged.  Then let’s say we have an outbreak or, in the case of pharmacosurveillance, a signal of another kind.  We can query back, with public health authority, to such a database and actually return appropriate data, based on a prior agreement.

So with these two points of control, we actually decide what is in the database, who gets it, under what circumstances.  Those circumstances could include research, IRB approval, public health authority, drug safety surveillance.
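
To make those two control points concrete, here is a conceptual Python sketch of the SPIN-style model just described: each institution exposes only chosen records in an external node, routine queries return anonymized aggregates, and an authorized query-back returns fuller data under a prior agreement.  The class names, the "authority" flag, and all records are hypothetical, not SPIN’s actual interface.

```python
# Conceptual sketch only; everything named here is invented for illustration.

class Node:
    """A site's external database: only what the institution chose to expose."""
    def __init__(self, name, exposed_records):
        self.name = name
        self.exposed = exposed_records

    def query(self, predicate, authority=None):
        hits = [r for r in self.exposed if predicate(r)]
        if authority == "public-health":
            # Identified follow-up, permitted only under a prior agreement.
            return hits
        # Routine surveillance sees anonymized aggregates only.
        return {"count": len(hits)}

class Supernode:
    """Broadcasts a query across the peer network and pools the replies."""
    def __init__(self, peers):
        self.peers = peers

    def broadcast(self, predicate, authority=None):
        return {p.name: p.query(predicate, authority) for p in self.peers}

net = Supernode([
    Node("hospital_a", [{"drug": "drugX", "event": "rash"}]),
    Node("hospital_b", [{"drug": "drugX", "event": "rash"},
                        {"drug": "drugY", "event": "nausea"}]),
])
print(net.broadcast(lambda r: r["drug"] == "drugX"))
# {'hospital_a': {'count': 1}, 'hospital_b': {'count': 1}}
# A confirmed signal would trigger an authorized query-back:
#   net.broadcast(lambda r: r["drug"] == "drugX", authority="public-health")
```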

Let’s just look briefly now at signal detection.

Challenges:  Discovering unexpected drug safety issues requires expansion beyond pure hypothesis testing.  However, multiple testing reduces specificity.  This is why we can’t have our cake and eat it, too.  We can’t do this perfectly.  We have ROC curves.  We have sensitivity and specificity.  But there are new opportunities.  There are computational approaches to signal detection.  Data-mining approaches can be informed by available data.

In the informatics program, we do work looking at multiple data-source adjustments for multivariate surveillance.  We are looking at new approaches to cluster detection and signal detection.  Just to give a reference, there are a number of methods from biosurveillance that I think are highly pertinent here.
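
Dr. Mandl does not name a specific statistic here, but one standard pharmacovigilance example is the proportional reporting ratio (PRR), which also illustrates why screening many drug-event pairs erodes specificity.  The counts in this Python sketch are invented; the PRR formula and its approximate confidence interval are standard, not a method the transcript attributes to anyone.

```python
import math

def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio from a 2x2 table of report counts:
         a = (drug of interest, event of interest)
         b = (drug of interest, all other events)
         c = (all other drugs, event of interest)
         d = (all other drugs, all other events)"""
    return (a / (a + b)) / (c / (c + d))

def prr_lower_95(a: int, b: int, c: int, d: int) -> float:
    """Approximate lower bound of the 95% CI on the PRR; a common screening
    rule flags a drug-event pair when this bound exceeds 1."""
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    return math.exp(math.log(prr(a, b, c, d)) - 1.96 * se)

# Invented counts: 20 reports of the event on the drug against background.
print(round(prr(20, 980, 100, 98900), 1))           # ~19.8
print(round(prr_lower_95(20, 980, 100, 98900), 1))  # ~12.3

# Screening thousands of drug-event pairs this way multiplies false alarms,
# which is the specificity loss noted above; in practice a multiplicity
# adjustment (e.g., false-discovery-rate control) accompanies the screen.
```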

Adjudication:  This is, I think, the hardest part, because there is always a tradeoff between early detection and false alarms.  In this area, if an early signal becomes public, a drug becomes impugned and you end up knocking a pharmaceutical company off the stock market, or at least off kilter.  That is a real danger that we need to confront.  In order to engage pharma in this, we need to come up with a system where people feel comfortable about the way these signals are adjudicated.  I think engaging pharma, health care, academia, and government together is critical.

The opportunities here are to provide maximum information for making decisions.  I think we need to use open methods so that everyone can see how the result was produced.  But we need to be able to create some kind of environment in which those methods are discussed, probably not with a representative of the press there in the initial conversations, so that we can look through these data and have a nimble set of action arms from the Sentinel Network.

At the Center for Biomedical Innovation at MIT, such a safe haven has been created and is, I think, providing a very nice model for how this might happen.

Lastly, I will just mention communication.  Communication, again, is a complex area.  A challenge is that it is very difficult to get complete information out to the people who need it in a timely fashion.
The solutions:  We can deliver tailored and targeted information to individuals through informatics, personally controlled health records, and point-of-care services.  We can leverage experience from a number of areas.

In summary, the three areas for focus, I think, to develop a Sentinel Network, the science that needs to happen:

  • One, the computational informatics.  That is the clinical intelligence here.
  • Two is the industrial-societal interface.  I think this is the most difficult one, because there is really no good model for it out there yet that is operational.
  • Three is the medical informatics around the data exchange so that we actually have data flowing in the right ways.

I will leave it there.

DR. SHUREN:  Thank you.

Let me ask if there are questions.

DR. WOODCOCK:  I am very interested in this question, which I also alluded to, of developing open methodologies that are consensus approaches that have buy-in.  I think, if we get enough data, we could impugn every drug in the armamentarium and we could stop using drugs, which would be very safe, but it wouldn’t help people very much.  So we really need to figure out -- and this is, I think, a new challenge.  I would ask all the audience if there is another precedent somewhere in prior human informatics history, or whatever, that we could look at, where we are talking about how to develop, through consensus, methods of analysis that are agreed-upon as reliable.  That doesn’t mean they are going to get the right answer, but it means that we would agree to go forward with them.

I think, if we can do that, we will be able to get all the partners to engage in this.  We share some interests in common in finding out the truth, so to speak, the correct signals and answers.

So I would be interested in your perspective on that.

Also, for any given analysis, say we construct this resource.  We have heard about one resource already.  Say we construct a larger resource.  The question of access to that resource and how that is managed, I think, is another very interesting question that is going to have to be discussed.

I would like your comments.

DR. MANDL:  I really think this is the hard part.  In the outbreak detection world, we started worrying about the same issue.  What we worried about was, when these things went off and signaled that there had been an anthrax attack, how would we inform society that there had been an anthrax attack?  What would be the tradeoffs of an early warning, with panic and stock-market effects?  It turned out that in seven years of this, none of these systems has signaled an anthrax attack.  We kind of forgot that that was what we were initially worried about in these sorts of decision frameworks.

But I think that we are not going to have that kind of honeymoon period here in drug safety surveillance.  Quite the opposite.  The interesting thing about sharing these data is that the data really will need to be available so that the analysis on which decisions are made can be replicated, and the methods need to be open so that they can be replicated.  I think replicating the same finding across multiple datasets and having a multiple-modality approach to following up on early signals, and one that is rapid, is going to be the key.  So we need that nimble capacity -- and I think there are probably a lot of individuals in the room who represent just such organizations and are stewards of such sources of data and sets of skills for analyzing the data, who could become part of a network that rapidly responds to an initial signal.

Then we have to understand the value of data in different datasets and by different methods, in order to know when we are starting to reach something that represents a preponderance of evidence.

DR. SHUREN:  Other questions?

MS. CRONIN:  You mentioned the NHIN and the regional emerging models for data exchange.  In the work that you have done so far, have you started to identify the requirements or the data models for drug surveillance, in particular?

DR. MANDL:  I think to get started in this, it would be some of the standard clinical data that we need to identify adverse events, plus medications.  The more detailed information we have about medications, the better off we will be in terms of being able to truly define that exposure.

But I really think it’s more the analytics about how we use those data than the data.  I think the data are, on the whole, the usual suspects.  Then really being able to define exposure and outcome is another set of tricks.

MS. CRONIN:  I’m just thinking, in terms of developing the infrastructure that will be evolving over the next year, particularly as the next round of the NHIN gets funded and it does have more of a regional and state orientation, it would be helpful to know if there has been already some thinking and some work done on what the requirements are, whether you need a hybrid approach, whether you need some kind of repository so that you can have easy access to data that you need to do this type of analysis.  Or could it be something completely federated, where you are just going to query on demand?

That is an example, but there are a lot of other issues that would need to be resolved, I think, to make this work on a regional or statewide basis.

I am just curious to know whether or not anyone in CBI or maybe in Regenstrief has thought through that a little bit.

DR. MANDL:  I’m sure Regenstrief has thought about this.  I’m sure Marc will have an opportunity to respond as well.

I think it is a hybrid approach.  Some data will need to be put together so that you have a broad sweep.

On the other hand, I think there are individuals here representing very large organizations in which data could be analyzed locally and signals then combined subsequently.

So I think it will be some combination of combining data and combining signals that we need to develop.  A lot of that is going to be driven not by what would be ideal, but by what is practical from a sociopolitical perspective.  I get the feeling the data will tend to stay where they are, no matter what we do, and so we should definitely conceive of a hybrid approach.

DR. BRAUN:  One thing that you said struck me.  I first want to make sure I heard it correctly.  I think it was with respect to biosurveillance, where you were saying that the sites that were in the periphery would send in the information that they wanted to.  You did say that?  Okay.

In population-based reporting systems, my understanding is that they operate under the understanding and the standard that there will be complete reporting and you will know denominators and numerators.  Can you explain how analysis of such data, where you would have heterogeneity of what is submitted, would be accounted for?

DR. MANDL:  In this model you would have a core dataset that would be a minimum dataset.  But then you might contribute more, let's say.  For example, in biosurveillance, perhaps some institutions would contribute detailed location data -- perhaps some would be at the level of the zip code, some at the level of the county -- and then you would deal with what you had when you got there.  But you would have widespread participation at that level, rather than choosing just the least common denominator and imposing that widely.  It's a way to get the least common denominator plus additional information from different sites.
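
A minimal sketch of that "least common denominator plus extras" idea, with invented field names: every site must supply the required core, and sites that can offer finer location detail simply do.

```python
# Hypothetical core-plus-extras schema; field names are made up for illustration.
CORE_FIELDS = {"age_group", "syndrome", "report_date", "county"}

def accept(submission: dict) -> dict:
    """Admit any record carrying the core; keep optional extras (e.g., zip code)."""
    missing = CORE_FIELDS - submission.keys()
    if missing:
        raise ValueError(f"core fields missing: {missing}")
    return submission

def location(record: dict) -> str:
    """Use the finest granularity a site offered, falling back to the core field."""
    return record.get("zip_code", record["county"])

site_a = accept({"age_group": "0-4", "syndrome": "GI", "report_date": "2007-03-07",
                 "county": "Middlesex", "zip_code": "02139"})   # detailed site
site_b = accept({"age_group": "0-4", "syndrome": "GI", "report_date": "2007-03-07",
                 "county": "Marion"})                           # core-only site
print(location(site_a), location(site_b))   # 02139 Marion
```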

DR. BRAUN:  Certainly, then, with the common dataset, the advantages are clear.  The analytic complexity of dealing with heterogeneous submissions from large numbers of sites is certainly something that impresses me.  That is just an observation.

DR. MANDL:  Absolutely -- and depending on the precise application for the data.  We are talking about repurposing networks for multiple causes.  Let’s just say that clinical care is cause number one.  With clinical care, the more, the better, but we will take what we can get.  That’s clearly the way it is.  Right now what we have is nothing, usually, when a patient shows up.  So anything is good.

So the system is designed to think about all these different opportunities.

You are absolutely right.  For certain research purposes, you really may want to define a common dataset across those sites.  But by having the same network making data available for clinical care, for research, for public health surveillance, I think the institutional control over what goes in will obviate the need for consensus building, which doesn't usually work very well.

DR. SHUREN:  Thank you very much.

Dr. Overhage?

 Agenda Item:  Presentation by Marc Overhage

DR. OVERHAGE:  Thank you very much.

You can tell who didn’t get their presentation in on time.  Ken is always two steps ahead of me.  Thank you for setting things up so nicely, Ken.

I am going to share a little bit about the work that we have been doing over the last 15 or 20 years to build a regional population-based health-information exchange, and then a little bit of the experience that we have had in leveraging that, primarily for using it as a tool for measuring outcomes in various kinds of clinical studies that we have been doing.

This infrastructure that we have built, which we have been calling for a while the Indiana Network for Patient Care, is in many ways comparable to the kind of infrastructure that Ken was describing in a local market.  The market that we are most deeply penetrated in today, just to give you a sense of scale, is this:  It’s about 120 miles square, about 4,000 physicians, about 1.9 million people who live there today, although, obviously, data are captured longitudinally, and so we have data for about 3.5 million people in the repositories today.

The way that we attack this problem -- and Ken talked a little bit about leaving the data where they are -- one of the challenges that we find is that, leaving data where they are, you run into all kinds of little “gotchas.”  For example, laboratories keep data for a very short period of time, typically -- 30 days, 60 days, something like that.  Other systems roll things into offline third-tier/third-world storage that you can ostensibly retrieve data from, but you may or may not be able to.

One of the challenges is that, whether you centrally manage those data or you leave them where they are, there are some core challenges you have to address, including how you link together the patient's data.  There is not a common identifier.  The PBM uses one descriptor for the data; the health plan uses another; so do the physician practice and the three hospitals the patient has gone to.  Just think for a moment about, for example, the Medicare population, whose members see, on average, six physicians a year.  They are not all in one place.

So one of the first challenges is how to link together patient data.  Every institution typically has some kind of unique identifier for the individual patient, associated with a variety of demographic data -- date of birth, Social Security number, things of that nature.  That unique identifier, unfortunately, isn’t linked to anything else.

What we do is, we take advantage of the fact that everybody has a relatively common set of demographic data.  Currently we use a fuzzy deterministic algorithm to link together those unique identifiers across multiple institutions.  We have examined a variety of other strategies as well.  It turns out you can do very, very well.
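
A hedged sketch of what deterministic linkage over normalized ("fuzzy") demographics can look like.  The field names and the "3 of 4 fields agree" rule are assumptions for illustration; this is not the actual Indiana algorithm.

```python
# Minimal deterministic-linkage sketch: normalize demographic fields, then
# declare a match when enough normalized fields agree.
import re

def normalize(rec: dict) -> dict:
    return {
        "last":  re.sub(r"[^a-z]", "", rec["last_name"].lower()),
        "first": rec["first_name"].lower()[:1],          # first initial only
        "dob":   rec["dob"],                             # ISO date string
        "ssn4":  rec.get("ssn", "")[-4:],                # last four digits
    }

def same_patient(a: dict, b: dict, required_matches: int = 3) -> bool:
    na, nb = normalize(a), normalize(b)
    matches = sum(na[k] == nb[k] and na[k] != "" for k in na)
    return matches >= required_matches

hospital = {"last_name": "O'Brien", "first_name": "Mary",
            "dob": "1950-02-01", "ssn": "123456789"}
pbm      = {"last_name": "OBRIEN",  "first_name": "M.",
            "dob": "1950-02-01", "ssn": "987654321"}
print(same_patient(hospital, pbm))   # True: last name, initial, and DOB agree
```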

I should point out that even if you have a common national identifier, it doesn't solve your problem, because those are erroneous a substantial number of times -- the same kinds of problems you see with Social Security number entry or anything else.  In countries that do have a national identifier, it is often not used as the identifier.  For example, in Great Britain, if you go and try to retrieve the patient's data using their National Health identifier, it is often not a searchable field in the database, in the practice-management system.  It is recorded as an attribute of the patient.  And there are the usual problems of people who don't have one, or who have multiples, and so on.  So you still need these kinds of strategies, even if you have a national identifier.

We use those demographics to deterministically link together those different identifiers for the individual.

The second challenge that we have is that the data are simply not identified in a common way.  A serum sodium at every one of the laboratories in this country is called something different.

Unfortunately, we are not so clever in solving this problem.  We use the example here of immunizations.  Different organizations have a different way to identify immunizations.  There are national codes for these things ‑‑ in this example, LOINC codes.  Unfortunately, organizations don’t use them, by and large.  There is a whole variety of reasons for that.  It is going to be quite a while, I believe, before those are adopted and driven into the operational systems, like laboratory systems, radiology systems, and so on.  There is a whole variety of reasons behind that.

So we take these standardized national codes and map them -- and, yes, that is a lot of work -- so that we know that a serum sodium from each of the different services is LOINC code whatever-it-is.  (Contrary to public opinion, not everybody at Regenstrief has memorized the entire LOINC dictionary of 40,000 terms, so I don't know what it is.)
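
The mapping step itself is conceptually simple, even though building and maintaining the tables is the expensive part.  A minimal sketch, in which the local lab names and local codes are invented (the LOINC code 2951-2 for serum/plasma sodium is real):

```python
# Per-source mapping of local test codes to one common code.
LOCAL_TO_LOINC = {
    ("lab_a", "NA"):        "2951-2",
    ("lab_b", "SODIUM-S"):  "2951-2",
    ("lab_c", "CHEM:1234"): "2951-2",
}

def to_common_code(source: str, local_code: str) -> str:
    try:
        return LOCAL_TO_LOINC[(source, local_code)]
    except KeyError:
        # Unmapped codes are where the human curation work goes.
        raise LookupError(f"no mapping yet for {source}/{local_code}")

print(to_common_code("lab_b", "SODIUM-S"))   # 2951-2
```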

The other thing I will say about this is that people often forget that physicians also have multiple identifiers.  I now have 49 unique identifiers as a physician in our market, including my national provider ID.  Yes, I am one of the 1.3 million who have registered so far.  But I have 46 other ones, as well, that are in common use.  There we use a human-mediated process to link them together.  In other words, we have a person who sits down with the physician and their practice-management team and says, “Yes, that's him.  Yes, that's him.  Yes, that's him,” typically because there is not enough metadata to combine them automatically.

If you go through this process over and over again for a variety of different health-care participants, including payers and others, we start to create -- Wes Rishel coined a term that I like, “edge proxies” -- in other words, these databases of the nature that Ken described that represent the data from this individual institution, separated and segregated, and in our case centrally managed, for a variety of reasons, with that global patient index, a global provider index, and a concept dictionary that link that information together.

One of the other things that we do is -- a great deal of information is still locked up in text reports.  Even if a physician has an EMR, many of them use it as a glorified typewriter, and the information that is retrievable from that system is a blob of text.  So we incorporate various levels of -- I put it in quotes, because I hate to glorify it by calling it “natural language processing.”  We use that set of tools to extract key concepts, including, very importantly, negation -- in other words, “The patient denied nausea and vomiting,” is a very common kind of expression to find -- and store those back into the repositories as structured data so that they can be used.
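
A deliberately tiny, NegEx-flavored sketch of the negation idea -- the real pipeline is far more sophisticated, and the cue list and window size here are arbitrary choices for illustration:

```python
# Minimal negation detection: look for a negation cue within a few words
# before a concept and store the finding as structured, negated data.
import re

NEGATION_CUES = r"(?:denied|denies|no|without|negative for)"

def extract(text: str, concept: str) -> dict | None:
    """Return {'concept', 'negated'} if the concept appears in the note."""
    m = re.search(rf"\b{concept}\b", text, re.IGNORECASE)
    if not m:
        return None
    window = text[max(0, m.start() - 40):m.start()]
    negated = re.search(rf"\b{NEGATION_CUES}\s+(?:\w+\s+){{0,3}}$", window, re.IGNORECASE)
    return {"concept": concept, "negated": bool(negated)}

note = "The patient denied nausea and vomiting but reported dizziness."
print(extract(note, "vomiting"))    # {'concept': 'vomiting', 'negated': True}
print(extract(note, "dizziness"))   # {'concept': 'dizziness', 'negated': False}
```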

Basically, we capture data in real time from data sources -- laboratories, radiology, PBMs, things of that nature -- clean them up, if you will, and then we drive a variety of processes off of that.  So the Health Information Exchange really is that common flow of data, structured in a consistent fashion so it can be used for a whole variety of services.

As Dr. Hill described earlier, it’s critically important that you do that, because you need to use these data for more than one thing if you want to be able to pay the bill at the end of the day.  These are expensive processes, to normalize these data, to build this infrastructure, to capture things.  If you can’t find multiple income streams, you are going to have a hard time funding it at the end of the day.

But very importantly, in the middle of that is a concept that we call “negotiated access.”  Everybody owns their data.  We subscribe to the principle that the patient owns the data, but people are custodians of the data.  They all want to make sure they know exactly what is going to happen to their data, when, by whom, and I think, going down the road, as these data are used for commercial purposes, how much am I going to be paid?

We have used this infrastructure, as I said, primarily to look at outcomes and research projects for adverse drug events.  We essentially do that with a batch process that each day scans this federated repository with a distributed query and identifies candidate or potential adverse drug events.  But we have found that for study purposes anyway, we have to incorporate a human review process in order to really determine whether those are true adverse events or not, for the usual reasons that you run into.  The fact that somebody died while they were taking a drug obviously doesn’t mean it was due to the drug, but they did.  You can build various levels of intelligence in to refine that so that the positive predictive values get up into the range of .2 or so -- in other words, that 2 out of every 10 represent an actual event.  That is about the best that we have been able to do to date.

We incorporate a process of tiered review, where the computer identifies events from the database; data analysts -- not clinical people -- are able to screen about 90 percent of those out.  They are able to say, “Nope, it ain’t an adverse drug event,” based on some tools and algorithms that we have developed.  Only a minority require clinical review.  We only do this when we are doing projects.  The reason we do that is that even this approach, which is fairly cost-effective compared to traditional approaches, costs $40.00 for every ADE we find.
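
Schematically, the funnel looks like the sketch below.  The screening rules are placeholders; the proportions (analysts clear roughly 90 percent, and about 2 in 10 of the remainder are confirmed) are the ones quoted above.

```python
# Tiered-review funnel: machine flags -> analyst screen -> clinician adjudication.
def tiered_review(candidates, analyst_screen, clinician_review):
    survived_analysts = [c for c in candidates if not analyst_screen(c)]
    confirmed = [c for c in survived_analysts if clinician_review(c)]
    return confirmed

# Toy numbers matching the talk: 1,000 machine flags, analysts remove ~90%,
# and roughly 2 in 10 of the remainder are true ADEs (PPV ~ 0.2).
candidates = list(range(1000))
confirmed = tiered_review(
    candidates,
    analyst_screen=lambda c: c % 10 != 0,       # analysts clear 90% (toy rule)
    clinician_review=lambda c: c % 50 == 0,     # ~20% of the rest confirmed (toy rule)
)
print(len(confirmed), "confirmed ADEs from", len(candidates), "machine flags")  # 20 from 1000
```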

So that is about where we are today in our ability to do that.

As I say, for a variety of studies we have done ‑‑ these are some data from a collaborative study we did with the folks at Brigham and Women’s Hospital, using this same approach, same set of rules for identifying ADEs in the two markets, and over the course of a four-month period, identified a variety of ADEs, with, as I said, a positive predictive value of around .2.

The last thing I want to close with -- and Ken, fortunately, set this up, so I don’t have to explain it all ‑‑ is that we have been leveraging this SPIN tool, the Shared Pathology Information Network tool that Ken alluded to, which was created under an NCI contract, as a tool for discovery, for data mining, if you will, that allows us to do exploratory queries in a completely anonymized fashion across that very large distributed database.  As an example of this, a few years ago, one of our fellows at the time heard at a conference a case report that suggested that erythromycin might be contributing to the incidence of pyloric stenosis in infants.  It is fairly common for infants to get erythromycin for a variety of reasons.  He came back and, within 48 hours, was able, using this tool, to very convincingly demonstrate that, yes, indeed, there was an extremely strong association.  I think that has been taken forward into common practice today.
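
A hedged sketch of the kind of fully anonymized, distributed query that supports this sort of exploration: each site returns only aggregate 2x2 counts for an exposure and an outcome, and the investigator pools them.  All function and field names here are invented for illustration, not the SPIN interface itself.

```python
# Each node returns counts only -- no patient-level data leave the site.
def site_counts(site_db, exposed, outcome):
    tally = {"e_o": 0, "e_no": 0, "u_o": 0, "u_no": 0}
    for patient in site_db:
        key = ("e" if exposed(patient) else "u") + ("_o" if outcome(patient) else "_no")
        tally[key] += 1
    return tally

def pooled_relative_risk(all_sites, exposed, outcome):
    total = {"e_o": 0, "e_no": 0, "u_o": 0, "u_no": 0}
    for db in all_sites:
        for k, v in site_counts(db, exposed, outcome).items():
            total[k] += v
    risk_exposed   = total["e_o"] / (total["e_o"] + total["e_no"])
    risk_unexposed = total["u_o"] / (total["u_o"] + total["u_no"])
    return risk_exposed / risk_unexposed

# e.g., exposed = lambda p: "erythromycin" in p["meds"]
#       outcome = lambda p: "pyloric stenosis" in p["dx"]
```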

That was the last thing I wanted to share.  I look forward to the conversation throughout the day.

Thanks, Ken, for saving me several minutes of discussion and keeping me on time.

DR. SHUREN:  Thank you.  I like that sort of economic approach of trading off time between speakers.

Let me ask if there are any questions from the panelists.

DR. GROSS:  I have a question.  This pertains to the last speaker as well.  Don’t forget about medical devices.  To what extent are you thinking about medical devices?

DR. OVERHAGE:  It’s a very important question.  Given where we live, with Cook and most of the orthopedic devices in the nation manufactured in the northern part of the state, I am very sensitive to devices.

Devices are generally poorly captured in routine clinical care today.  We have been working, but have not gotten very far down the road, on focusing on capturing the details of the individual device more reliably in routine clinical care processes, not just for purposes like this, but also for purposes focused around economic analysis and a whole variety of things -- the data reuse issues that Dr. Hill alluded to and I am sure you will hear about the rest of the day.

Certain things -- for example, implantable cardiac devices -- we are actually pretty good at now.  I can tell you if you have a drug-eluting stent, of what brand and so on, fairly well across that patient population.  But we have a long way to go.

DR. GROSS:  Just briefly, how did you get to that point with ICDs?

DR. OVERHAGE:  We didn’t rely on ICDs to do that.  Instead of relying on the billing codes, we capture data from clinical information systems.  For example, these particular data we focused on capturing through cath lab systems.  There is often a way to record this kind of detailed information in the clinical care system, but people don’t always do it.

So building the process and the value case for capturing that at the point of care has been the challenge.  As I said, for cardiology that is going well.  For orthopedics, we have a way to go yet.  We have lots of other things on the list.

DR. SHUREN:  Let me just ask one follow-up on that.  Whereas for drugs we have a national drug code, we do not have a national system for device identifiers, currently.  How do you wind up, then, identifying devices?  Do you have some internal codes being used or some other way of capturing that information?

DR. OVERHAGE:  It’s a very good question and opens a wonderful door that I will take advantage of.

The question about the device codes -- because we have taken a fairly targeted approach, initially, and focused on some of the cardiac devices, we relied on manufacturers' model numbers as a fairly simple way to represent them.  There obviously needs to be a more robust way to deal with that.

The door that you opened that I will take advantage of is, NDC codes stink.  I think anybody who has worked with them will tell you that one of the big challenges is that 10 percent of the NDC codes we see in the wild are not real.  In other words, if you take streams of data from clinical systems, 10 percent of the time you don’t know what it is.  Second, for much of what we do, it doesn’t aggregate things at a helpful level.

The good news is that I think the very strong work that the NLM and FDA and others have been doing on RxNorm is coming a long way in creating tools that will facilitate that.
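
To make the two NDC problems concrete, here is a sketch with invented data: codes "in the wild" that match no known product get counted as unknown, and the survivors are rolled up to an ingredient level through an RxNorm-style lookup (the tables here are stand-ins, not the real RxNorm terminology or API).

```python
# Hypothetical product directory and ingredient mapping, for illustration only.
KNOWN_NDC = {"00071-0155-23", "00006-0749-54"}
NDC_TO_INGREDIENT = {"00071-0155-23": "atorvastatin",
                     "00006-0749-54": "simvastatin"}

def clean_and_aggregate(ndc_stream):
    unknown, by_ingredient = 0, {}
    for ndc in ndc_stream:
        if ndc not in KNOWN_NDC:
            unknown += 1                 # the "not real codes in the wild" problem
            continue
        ing = NDC_TO_INGREDIENT[ndc]     # the aggregation problem
        by_ingredient[ing] = by_ingredient.get(ing, 0) + 1
    return unknown, by_ingredient

print(clean_and_aggregate(["00071-0155-23", "99999-9999-99", "00006-0749-54"]))
```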

I appreciate you opening that door, because it’s one of my soapbox issues.

DR. WOODCOCK:  May I just add something to that?  What we are doing on drug registration and listing also addresses the sort of bogus NDC codes, but it won't address the aggregation problem, which the medication terminology is aimed at fixing.  So we are trying to work on that.

My question:  You said that local groups are going to want to continue to have control over their data.  Does that mean per analysis?  If you were to participate in a national analysis of some adverse event, that would be on a per-analysis basis that you would like to have that control?

DR. OVERHAGE:  That is a very good question.  In other words, we have this wonderful store of data now.  How free can you be with it?  Again, we always treat them as the participant’s data.

The way that we have managed that -- and I don’t know if this is the answer -- there is a local management group with broad representation that we bring categorical uses to, not individual uses.  For example, public health surveillance is a use case, if you will, that we brought to the group, and they said, “Sure, under the following terms, you can do anything that you need to do for public health, as long as you don’t cross this line.”  For research purposes, as long as it is anonymized and so on -- you don’t cross that line.

There are things that they get real nervous about, and so there are things that get reviewed.  But by and large, the process has been, single IRB approval, if IRB approval is appropriate, single point of review for categories of uses, and then as new and novel uses are created, we stimulate a lot of discussion and dialogue.

MS. CRONIN:  Marc, I just want to thank you for what I think is your 10th presentation at an HHS public meeting this year.  It was very helpful.

I am wondering, since you have had a lot of experience in working with industry and other academic partners in doing this kind of work, and you also are very familiar with everything happening in ONC, how do you think this whole area could be advanced?  As we know, the NHIN is going to be rolling out.  What is the best way for FDA and academia and a lot of these regional initiatives to be working together over the next couple of years?

DR. OVERHAGE:  It is obviously a big question.  If I knew the answer to that, we could all move a lot faster.  It is not a simple answer, I don’t think.

I think some of the key things are -- and Ken set up a number of them very well, and I think drug safety is a wonderful example for working this through:  How do you take an information resource, which certain people feel they own or which has value, and bring that to the public good?  Those kinds of policy issues, I think, are one of the great challenges.

The second area that I think needs a lot of attention, as Ken described, is the analytics.  I didn’t talk about that at all, but I think that is an extremely challenging area that we have just scratched the surface on.  Actually, some of the dialogue earlier -- it’s easy to do analytics when you kind of control everything and it’s all laid out nicely.  When you accept the messiness and ugliness of the real world, the analytics get a lot uglier and messier.

I think those are the two developmental areas, in parallel with all the other things that have to happen in terms of moving data standardization and so on forward, which many initiatives around the country are already trying to move.

That was a lousy answer.  Hopefully, by the end of this meeting, I will have a much better answer.

DR. SHUREN:  Thank you very much.

Dr. Platt?

 Agenda Item:  Presentation by Rich Platt

DR. PLATT:  Let me start by saying my name is Richard Platt.  I am wearing three hats in talking to you today.  The first one is a pointed hat.  I am a pointy-headed academic at Harvard Medical School and Harvard Pilgrim Health Care.  I am also principal investigator of the HMO Research Network, Centers for Education and Research on Therapeutics.

I am sharing the honors with my colleague, Hugh Tilson, in saying some words on behalf of the 11 Centers for Education and Research on Therapeutics.  The CERTs are a group of centers that were mandated into existence by Congress to be a trusted national resource in therapeutics and are administered by AHRQ in coordination with the FDA.

My thrust today is to talk with you about a large, robust system for active surveillance that I believe we could bring into existence within the next year or two.  Dr. Woodcock did a terrific job of setting us up on what we would want from an ideal sentinel system.  I will just repeat what I think are the two highlights, which are to prospectively monitor all aspects of therapeutics and to be able to perform confirmatory studies when we find problems.

My colleagues at the CERTs think that what we really need is a drug safety toolbox.  I am going to concentrate my comments today on the active surveillance component of that toolbox, and within that active surveillance drawer of the toolbox, I would like to spend some time on the use of claims databases, which are large and essentially ready to use for the purposes of drug safety surveillance.

There are already three very well-established examples of the use of claims databases for postmarketing safety.  The CDC has for over 15 years run the Vaccine Safety Datalink, which is a collaboration of eight HMO Research Network health plans, including four Kaiser plans, 7 million lives covered.  The emphasis has been on pediatric vaccines.

The FDA maintains a collaboration with four groups.  One is the HMO Research Network, another is the California Kaisers, a third is United, and there are two Medicaid plans, covering about 20 million lives.

Finally, the VA system has done some extraordinarily good work on postmarketing safety.

The hallmark of each of these successful systems is, first of all, their ability to link the kinds of data that Jeff and Ken and Marc have talked about.  In addition, they have access to full-text medical records when they need them.  In places like Indiana, that can be from electronic medical records, but mostly these days, it’s the go-get-the-paper-record-and-look-at-it.  That is necessary for a tiny fraction of records, but it is a critical piece.

However, these data resources are really insufficient.  They are insufficient, in large measure, because they are just not large enough to answer the pressing problems that confront us.  Let me give you a real example that my colleagues and I are working on now.

There is an important public health and regulatory question about whether Menactra, the meningococcal conjugate vaccine, causes Guillain-Barré syndrome, a demyelinating condition that is almost always very serious and is not infrequently fatal.  The vaccine was approved in the spring of 2005.  The Advisory Committee on Immunization Practices recommended immunization of all adolescents.  Within 15 months, there were about 15 spontaneous reports of Guillain-Barré syndrome.  The manufacturer reported that about 5.7 million doses had been distributed by then.

The questions were, was this an excess risk of Guillain-Barré syndrome, or was this merely the background rate of a rare condition?  If it is an excess risk, how large is that risk?  Is there a high-risk subgroup?

The Vaccine Safety Datalink jumped right on that.  After a year and observing close to 100,000 vaccine doses, no cases of Guillain-Barré syndrome were observed among vaccinees.  That is not a big surprise, because the background rate is only 1 to 2 cases per 100,000 person-years.  So the only conclusion you can draw after a year of considerable attention by the nation’s foremost surveillance system is that we need a larger population.
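
The arithmetic behind that conclusion is worth making explicit.  The background rate comes from the talk; the six-week post-vaccination risk window per dose is an assumption added here purely for illustration.

```python
# Back-of-the-envelope: why 100,000 doses with zero cases is uninformative.
doses = 100_000
risk_window_years = 6 / 52                  # assumed 6-week window per dose
person_years = doses * risk_window_years    # ~11,500 person-years observed

for rate_per_100k in (1, 2):                # background GBS rate from the talk
    expected = person_years * rate_per_100k / 100_000
    print(f"background {rate_per_100k}/100k py -> expected cases: {expected:.2f}")
# -> roughly 0.1 to 0.2 expected background cases, so observing zero
#    says almost nothing about whether there is an excess risk.
```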

Where could you go for those larger populations?  I think there are three big sources that aren’t currently being adequately used.  One is Medicare, now that it has Part D data.  Second is Medicaid.  Most large states have data systems that could be very useful.  Wayne Ray’s work in Tennessee and now in Washington State, I think, illustrates the potential for using the data for that purpose.  Then there are private health plans that represent a couple hundred million people in the U.S.

I will say again that it is critical to have the ability not only to work with the linked data, but to have access, for a very small number of individuals, to full-text records.  Every discussion I have seen of plans to make Medicare data available is either silent on the ability to go back and get medical records or explicitly excludes that possibility.  I think that would be a great setback to the potential utility.

If you ask which of these data sources could we use to answer the question about the meningococcal vaccine, the answer is really only private health plans, because neither Medicare nor Medicaid has enough adolescents, who are the prime target for this vaccine, to be able to study that.

We are embarking on a health-plan study that would be what epidemiologists would call a plain-vanilla cohort study in four plans that have 40 million members.  I think Miles Braun was exactly right in saying that you need to have defined populations.  By their nature, health plans serve defined populations.  This study will use linked claims to identify individuals who are immunized, to sort out post-immunization person-time from other time, to identify potential cases, and then do medical-record review to confirm those potential cases.  This is a study that has been vetted with FDA and CDC.  There are clear rules for making interim reports to the regulatory agencies and a public report at the end.
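
A minimal sketch of the person-time bookkeeping such a cohort study requires: each member's follow-up is split into a post-immunization risk window and comparison time, which become the denominators for the two incidence rates.  The 42-day window and the data layout are assumptions for illustration.

```python
from datetime import date

RISK_DAYS = 42   # assumed post-immunization risk window

def split_person_time(enroll_start: date, enroll_end: date, vax: date | None):
    """Return (risk_days, comparison_days) for one member."""
    total = (enroll_end - enroll_start).days
    if vax is None or vax > enroll_end:
        return 0, total
    risk = min(RISK_DAYS, (enroll_end - vax).days)
    return risk, total - risk

members = [
    (date(2005, 1, 1), date(2006, 1, 1), date(2005, 6, 1)),   # vaccinated
    (date(2005, 1, 1), date(2006, 1, 1), None),               # never vaccinated
]
risk_py = sum(split_person_time(*m)[0] for m in members) / 365.25
comp_py = sum(split_person_time(*m)[1] for m in members) / 365.25
print(f"risk person-years: {risk_py:.2f}, comparison person-years: {comp_py:.2f}")
# Cases confirmed by medical-record review in each stratum divide these
# denominators to give the two incidence rates.
```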

On the basis of that kind of experience, the CERTs are fairly well along in planning to develop a health-plan consortium for public health.  Its goals will be to improve on the safe use of marketed vaccines and prescription drugs.  The target population is 100 million individuals.  This will be an activity of the CERTs, which I described very briefly while the slides were coming up.

The aims are to address both the prospective signal detection for therapeutics and to be able to do detailed assessments to follow up specific questions that arise; additionally, to identify unsafe use of therapeutics.

An example of the kind of thing that we think should become normal behavior in our society is a large-scale example of the kind of work we did in the HMO Research Network doing a “what if.”  This is work that AHRQ and FDA sponsored, in which we looked back at the HMO Research Network’s 7 million members and asked, what would have happened if we had been looking at Vioxx and acute myocardial infarction month by month as data accrued.

The red line shows the MIs that would have been expected if this population had had its baseline risk.  The blue line shows the actual occurrence of myocardial infarctions.  The arrow shows that there was a signal, a P-less-than-.05 signal, adjusting for multiple looks, at month 34.

The important point here is that that signal occurred after 28 myocardial infarctions were observed in the Vioxx-exposed group.  If we had had 100 million people under observation, that signal would have occurred within the second or third month after the product was put on the market, well before there were other kinds of intimations that there was a problem.
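
A toy version of that month-by-month monitoring, for illustration only -- the actual analysis used a formal adjustment for multiple looks, whereas this sketch splits the alpha evenly across the planned looks as a crude Bonferroni stand-in:

```python
# Compare cumulative observed events with cumulative expected events each
# month and flag when the Poisson tail probability drops below the per-look
# threshold.
import math

def poisson_tail(observed: int, expected: float) -> float:
    """P(X >= observed) for X ~ Poisson(expected)."""
    cdf = sum(math.exp(-expected) * expected**k / math.factorial(k)
              for k in range(observed))
    return 1.0 - cdf

def monitor(monthly_observed, monthly_expected, alpha=0.05, planned_looks=36):
    per_look_alpha = alpha / planned_looks   # crude multiple-looks adjustment
    cum_obs = cum_exp = 0
    for month, (o, e) in enumerate(zip(monthly_observed, monthly_expected), 1):
        cum_obs, cum_exp = cum_obs + o, cum_exp + e
        if poisson_tail(cum_obs, cum_exp) < per_look_alpha:
            return month, cum_obs            # month of signal, events so far
    return None, cum_obs

# e.g., monitor(observed_mis_by_month, expected_mis_by_month)
```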

We have thought long and hard about how you would govern such a consortium.  It is our belief that this needs to be a public-private partnership, with all stakeholders represented.  It needs to have a broadly representative governing board.  The research should be limited to public health priorities, which would be determined by the board, with input from a council of stakeholders.  Specific research topics would need to be ones that serve an important public health purpose, as manifested by the agreement of a federal agency that the work would be of use to it in its official functions.

The data sources can pretty clearly be health-plan standard claims data, as long as there is access to full-text medical records.  There is plenty of potential to include other data.  I subscribe to the notion that has already been much discussed this morning that the health plans need to retain control over their data and the uses of their data.  We are confident that there are ways to do it -- for instance, in the ways that Marc just described.

Transparency, I think, is a key to any system that we build.  Our view of the way to do that is to have the study protocols be available for public comment before they are finalized, to have those protocols available to the public at the time a study commences so it is quite clear what the targets were, so that post hoc analyses can be done, but we will know which ones were intended in the first place.  The results need to be placed in the public domain.

Both confidentiality and privacy are critical features.  We have already talked about that some this morning, so I won’t spend too much time here, except to say that it is important to be attentive both to the privacy concerns of individuals and to the concerns of the health plans whose data would be used here.

Finally, the funding model that we think will make this work is to have core funding to support the infrastructure that will be needed and then additional funding for individual projects.

My concluding thought is that health-plan data already exist and will allow us to substantially enhance the timeliness, the power, and the efficiency of postmarketing studies, and that this information should complement other large national data sources, specifically the Medicare and Medicaid data that exist and can be used to better effect than they are now, the VA data, and the Vaccine Safety Datalink.

Thanks.

DR. SHUREN:  Let me ask if there are questions from the panelists.

DR. BRAUN:  A hundred million people -- now you are talking big.  My question is, do you believe that will be representative of the United States?  You are over a third there, but there is still a potential to leave out important groups.  I just was wondering if you could address that.

DR. PLATT:  I will subscribe to Ken Mandl's point, that the perfect can't be the enemy of the good.

I think 200 million would be better, and it is probably within our reach.  But 10 large health plans will get us most of the way to 100 million.  There is some element of having to vet the data -- much as Marc talked about, there is a lot of important stuff in the details of putting the data together.

I promised you something that could be operational within a year or two.  I think it's realistic to have 100 million within a year or two.  There are some very interesting developments in the wind that might make 200 million possible much sooner than that.  But I think we can get a pretty good handle on a lot of the problems of interest.  I don't suggest for a second that this should be the end of a Sentinel Network system.  I think a lot of the other things that are being discussed today, and that I sort of flew through on my third slide, are important components of the full system that we need.

DR. BRAUN:  Thanks for pointing that out.  The big picture is, that's great, and it sounds like a wonderful idea.  I am just wondering, are there any important gaps that you want to point out -- racial/ethnic groups, age groups, or something like that?

DR. PLATT:  Sure.  Health plans are underrepresented in the Medicare-age population.  It’s hard for me to imagine that a rational society would not make good use of Medicare data.  If there is one thing to do, I think it’s to make sure that we make good use of Medicare data at the soonest possible opportunity.  That is one big place where I think we need to have information.

I don’t want to take a lot of time, but there are other important segments of society that I think we would have to address in a variety of different ways.

DR. BRAUN:  We are trying to work on the Medicare piece, as you know.  Thanks.

DR. WOODCOCK:  I have a question on the assembly of this.  It’s very good to know it’s feasible.  You had on your slide a foundation as one of the -- you had a bunch of things about different partners.  How would you view this as being housed?  By the CERTs?

DR. PLATT:  Yes, I think the CERTs are a credible organization to lead this, by virtue of their standing within AHRQ and FDA, and the fact that they are mandated to conduct public-private partnerships.  There are a lot of policies and procedures that are already in place, and a certain amount of transparency.  So I think the CERTs would be good candidates for doing it.

DR. WOODCOCK:  Are the CERTs set up to raise money from other sources?

DR. PLATT:  The short answer is yes.  The longer answer is, they knock on sort of the usual doors.  The CERTs don’t have their own checkbook.  But, yes, the CERTs are in a position to receive funds from a variety of sources.

DR. WOODCOCK:  Thank you.

DR. SHUREN:  Other questions?

[No response]

Thank you very much.

We have veered back on schedule.  It is now 10:00.  We will take a break.  We will start back up promptly at 10:15 and continue with the other invited speakers.

(Brief recess)

DR. SHUREN:  Let’s take up where we left off, and I will ask Dr. Tilson to come to the podium.

Agenda Item:  Presentation by Hugh Tilson 

DR. TILSON:  Thanks, Jeff.  Thanks, all of you on the panel.  Good morning, everybody.

We are going to try to get back on time.  Therefore, I was the first after-break speaker, because I can speak very rapidly, as you know.  If you will just listen at your usual pace, I am going to speak at mine, and I suspect you will finish this talk before I do.

I thank Rich Platt, particularly, for starting his comments with the analogy of wearing many hats.  For those of us who have worn so many hats for so long, this is the consequence of that.  I am wearing many hats today, including the hat of Dr. Rob Califf.  I bring you greetings from Rob, who was double-booked.  So I am Rob Califf for the day and wearing his nametag.  He will be back tomorrow to wear his own tag and give you the usual Duke counterpoint to what the University of North Carolina says so specifically today.

I am a clinical professor of public health leadership at Chapel Hill.  But I also am here speaking specifically wearing my hat as the chair of the National Steering Committee for the Centers for Education and Research on Therapeutics, CERTs.  You already heard some comments about the CERTs and some questions, particularly from Dr. Woodcock, and some terrific answers from Dr. Platt.  In fact, Rich, I thank you not only for the analogy of wearing many hats, but also for giving much of my talk.  That was good, because it buys me a few minutes to give the second half of the talk.

Before I do, let me be sure that I fill in a few of the blanks, for those of you who may have found that Dr. Platt glossed over the Centers for Education and Research on Therapeutics option a bit too rapidly, particularly as it was touted by him -- and I certainly agree -- to be perhaps the best answer for public-private partnership available to the FDA right now, and, with proper support and its ability to generate resources, perhaps our next best key to unlock the riddle of sentinel systems.

The CERTs were mandated by the FDA Modernization Act -- critical for you to know also, because that was the act that added public health to the FDA's mandate and then broadened its mandate to do this kind of therapeutics education and research.  Although an FDA mandate, the program is AHRQ-funded, nationally coordinated, and steered by a National Steering Committee, which I chair.  There are 11 centers.  More about those in a second.  Funding comes from AHRQ -- thank you again, AHRQ colleagues -- with core grants, but the funding for individual projects comes, as it would for any such program, from a mixture of core grant funding and funding from the FDA, particularly, as you saw from Rich Platt, for some of the HMO Research Network and other large Sentinel Network activities that are ongoing, which look at the issues of risk -- or, better said, the balance of benefits against risks -- as a key question for therapeutics research.

This over-busy organogram simply lists the currently 11, soon to be 12, centers.  Thank you again, AHRQ and Congress, for recognizing that we can't afford to cut back on education and research on therapeutics at a time when safety and safe use are so prominent in the national agenda.  The support that has come for these CERTs is deeply appreciated.

We are organized, as you can see, around a Coordinating Center.  Dr. Califf, whom I am also partially representing today, is the principal investigator of that Coordinating Center.  Each individual Center for Education and Research on Therapeutics undertakes special emphasis either on structure and methodology, as you have heard from the HMO Research Network, a population -- for example, children, from the University of North Carolina School of Medicine and School of Public Health’s CERT -- or a disease category -- for example, the Duke CERT, represented so ably here by its principal investigator, Dr. Kramer, for cardiovascular disease -- the reason that Dr. Califf is down at the stents meeting today.

Overseeing the activities there is a National Steering Committee, which has participation by FDA, AHRQ, other major federal agencies, the private sector, including Big Pharma, for which we are deeply grateful, and strong consumer and professional representation, just to be sure that the multidisciplinary public health nature of our work gets the kind of external, ongoing scrutiny that it needs.

That, then, is the theme of what I want to say.  The way forward for sentinel systems, clearly, for the Food and Drug Administration must be partnering.  Rob Califf, who isn't here today, and I had a good caucus about it.  Rob would have been a bit ruder and more abrupt in what he would say -- oh, one of my favorite talks of all time was one Rich Platt gave as a substitute at the last minute.  His first slide was, “What Ed Would Have Said.”  So, Rich, this is what Rob would have said.

Rob would have said, the problem is that everybody is pointing in the right direction and nobody is going there -- particularly, as you have already heard, to CMS, with its Medicare and Medicaid data.  That is the sine qua non here.  We simply cannot, as a responsible nation, fail to harness these data, and harness them in a way that the responsible research enterprise can work with collaboratively to get the job done.

The VA, so ably represented here and elsewhere in this area, likewise is looking for a way, strategically, to align, particularly with the CERTs, and the Department of Defense, no less so.

NIH -- and Rob would have talked more and will tomorrow talk more in conversation with you all -- about the CTSAs and their networks of networks, and the need for those networks to network with the CERTs.  Specifically, every CERT will have a CTSA associated with it, but not every CTSA will have a CERT.  So the question is, for those that don’t have a collocated major national node for therapeutics but are doing therapeutics research, how can we network those?

Finally, Rob would have pointed aggressively -- but you already have, so I don't need to do it again -- to the health plans themselves, who must be part of this solution, making the data available through learned, responsible intermediaries, probably in a nested fashion, as you have already heard from the prior speakers.

I have two challenges that I want to lay out today:  first, a homework assignment, and then a worry.

The homework is for all of you.  As you listen today and converse tomorrow, please be thinking of what we are talking about here not as a big public or public-private program, but as a major, bold, and vital national experiment.  The job of leaders is to lead in the face of uncertainty, not to pretend that there is certainty.  The corollary is to identify that uncertainty and work to clarify it.

So we need a research agenda in this area.  The CERTs stand prepared to work with you in clarifying the research agenda as you all surface it.  But as you hear people making assertions, please ask them their assumptions.  Where the data are not forthcoming -- the best methods, the best cross-design synthesis approaches, the best ways to get uniform coding or, if coding is non-uniform, reconciliation, the best way to get multiple individual research projects aligned well enough that we can have a number that is more than four (namely, not just the number of replications, but the number of people in the population) -- all of those are questions, not answers.  As we think together about the research agenda, look to the CERTs as a good place to work on it.

That is my request.

My worry is one big assumption that I have already heard again and again today.  I know you are going to hear it.  I raise it for you because I think we must work together aggressively to address it.  We don’t have a trained public health research and service workforce out there to do the necessary job of drug safety in America today.  I think it’s worse than that.  We don’t even have a workforce study to talk about how many people trained in what, with what kinds of competencies, we need, much less what we need if we do it right.  I think the need is urgent.

We need to talk about where people ought to work, and what kinds of people, multiple different levels of competency and expertise working together in a multidisciplinary team -- oh, yes, competencies.  We need to drive competencies by work, and we haven’t even defined all of the components of the work in driving these sentinel systems.  So as you listen today about that, be thinking, “All right, so if we drove those, who would I want driving them?”

One of my favorite analogies -- one of my favorites from Rich Platt, but another one from Dr. Hershel Jick -- is the marvelous Carnegie Hall concert, with a Steinway grand on the stage, a Mozart score teed up, and a chimpanzee sitting at the piano.  We can’t afford that, not in the year 2007.

If we can agree on the competencies that we want these people to have -- and we have not yet agreed -- then we need to go about training, because training to competency is a critical step here.  The CERTs, understanding the “E,” the education that is our middle name, surely do understand that.  But we do not have the horsepower to do it, nor even the centers of excellence to train people to do this Sentinel Network work.

If we agree that that must be done, then we need to be thinking about how you would know a good one, whether you would want to designate them or certify them, and, if you did, how to continue educating them as this sector bounces forward rapidly.

Finally, of course, who would do the training, when we have no faculty trained in this area either?

So I would end this little plea with a want ad.  I love the IOM report.  I hope you did.  I think that the IOM might do for this what it did for the future of the public’s health.  For the future of the public’s health, there was a companion study called Who Will Keep the Public Healthy?  I think probably we need an IOM companion study about the future of drug safety researchers, because I fear we don’t know where they will come from.

So the theme that Rob would have struck and that I will strike is that we need key partners -- industry, government, and the public.  But certainly for this to work, what we need is a stronger academic infrastructure.  I would look to the CERTs to build that.

Thanks.

DR. SHUREN:  Thank you.  Are there questions from the panelists?

You pointed to a great need for training.  Folks have talked about the need for analytical capacity, and not just in the realm of software -- mercifully, the human race can't be replaced quite yet.  So for that analytical capacity on the human side, we need those people.  As you said, it's a rather daunting challenge.

What would you really see as the critical next steps?

DR. TILSON:  Great question.  Obviously, I have thought a lot about it, and so have our colleagues in the CERTs and the International Society for Pharmacoepidemiology, which may be the forum in which this needs to happen, and certainly Pharma.

The way industry, and FDA, have approached this so far is the structured apprenticeship.  That is to say, the industry has created -- there are several examples of it -- sponsored fellowships in the area to entice scholars, particularly public health-trained scholars with epidemiologic backgrounds and skills, into this corner of epidemiology -- a critical one.  Pharmacovigilance and sentinel surveillance are, after all, critical public health functions, the first of the 10 essential services of public health, and they represent a wonderful place for us to recruit.

But then we are going to need to train them, probably on the job, probably with structured training programs, in industry and the agency and the CERTs.

So my first step would be for the commissioner, probably jointly with Pharma -- and I know AHRQ has great interest in this as well -- to put together an aggressive postgraduate training program where we designate lead scholars -- we don’t have very many of them, but we can find them and name them -- and have every one of them develop the next three.  That is the only way for us to break the logjam.

DR. SHUREN:  I will say, Dr. von Eschenbach has expressed a very strong interest in terms of a greater number of fellows at the FDA, as an opportunity to both train and provide expertise in other areas back to the agency.  There probably will be more discussion on that point.

I would ask for tomorrow, too, for folks on the panel to address that question as well:  What may be the interest of folks in academia or elsewhere, who are along in their careers and bring something to the table, but who may be looking for the greater training that places like FDA, and maybe elsewhere, could provide?  What might be that level of interest?  What kind of structure would we need to incentivize and get those folks in the door, particularly when you have a world where the federal government can't really pay the freight for those folks to come on through?  So it is something we would be interested to hear about.

DR. TILSON:  Thanks.  I would really welcome that.  I would also commend to those thinking about it, both now and over the next couple of months, the experience of the Wellcome Foundation, which from 1985 to 1995, had a series of Wellcome scholars, several of them now leaders in the education and research in therapeutics world, including Dr. Platt himself, who was a distinguished Wellcome scholar.  So the private sector can do some things, with the right mandate.

Also NIH, as you know, has a superb opportunity to provide funding and support for training -- particularly, for those CTSAs collocated with a CERT, an opportunity to pool funding.

DR. WOODCOCK:  I have a question.  Hugh, one of the gaps that I see, also at a basic level -- you are talking about basic competencies and so forth -- another basic ingredient that I think is lacking, and I would be interested to hear your perspective on it, is that clinical medicine lacks a coherent set of key terms, if you will, for disease or symptom or anything like that.  I am a rheumatologist, and I can tell you that in my field there is great interest in nailing things down a little bit better.  But again, I would think that the translational research units would be a good place to do that.  I know that at Duke they are doing a couple of projects, on tuberculosis and some other disease.

You can use all the natural language interfaces you want.  Unless you are able to conceptualize every way a physician could possibly describe something, you are not going to capture everything.

So it seems to me that that is another basic academic endeavor that needs to get started, because there isn’t even agreement among some specialty groups about how you characterize any given symptom or disease.  Or am I totally off-base?

DR. TILSON:  First of all, you know you are not totally off-base.  Second, you know me well enough to know that if you were, I would say so.  You are right on the money.

Let me take off all those hats and put on a different one -- namely, as past president of the American College of Preventive Medicine.  One of the great dilemmas in America is what a terrible job we do of training physicians, nurses, pharmacists, and dentists in preventive medicine, preventive dentistry, and so forth.  This whole notion of terminology, after all, should be driven by the epidemiologic understanding that if you don't name things right, you can never understand them.  So you should be looking to the epidemiology/preventive medicine world to take this lead.

It’s difficult, particularly in this era where medical education is so underfunded, and particularly with departments of preventive medicine having so much trouble getting proper funding, to get the proper leadership within the educational enterprise to do it.

So let me take off that hat and put the CERTs hat back on.  Therefore, at the very least, the CERTs, with “E” as their middle name, and with Anne Trontell chairing the education committee for the CERTs in our partnership with AHRQ, have been looking at some of the curriculum questions.  That is, if we don't create the demand function among physicians to get it right, it won't happen.  So we need to educate, particularly in continuing education, our medical, nursing, dental, and pharmacy colleagues about what they ought to demand in terms of rigor, both of themselves and their partners.  I think that needs to happen in undergraduate medical therapeutics education curricula.  At least that is where I would start, and the CERTs are already starting there.

DR. WOODCOCK:  I would also suggest that the CTSAs could take this on -- what Duke is doing is a very good project, but it needs to be replicated across so many other diseases.

DR. TILSON:  That certainly is true.  It certainly is true that we have to start wherever we can start.  That is, there are many places along this continuum where, if we can get a hook, we ought to put the hook in and just start.

But retraining physicians is not the way for us to do it.  We need to do it right the first time.  Although we will learn a great deal from continuing education and postgraduate research, the place for us to plow this back is getting the next cohort not making the same mistakes that ours did.

DR. SHUREN:  Thank you.

Dr. Chute.

 Agenda Item:  Presentation by Christopher Chute


DR. CHUTE:  Good morning.  It’s a pleasure for me to be here.  I’m Chris Chute from the Mayo Clinic.  I chair the Division of Biomedical Informatics.

I wanted to give you some tradition and context.

Arguably, for those of you who have been to Rochester, Minnesota, you might question why the heck it grew out of the cornfields.  It is, after all, in the middle of nowhere, quite literally.  If you go, you recognize that the quality of care, which actually has evolved over a century, ain’t bad.  Part of the reason that it isn’t bad is because in 1907 -- literally, a century ago; we are having our centennial -- we invested energy in organizing our patient records structurally, a paper database.  Here is our high-volume data store, otherwise known as a medical records room, that gave us access, sequentially and intelligently, to what it is we were doing to patients, so that we could study and improve outcomes.

Here is our sophisticated indexing system, otherwise known as a five-by-seven card, which had the advantage of working in 1907 to provide a realistic, functioning index to diagnosis and concepts.  Here is our patient identifier very carefully inscribed on one of these cards as a clinic number.

So all the issues of database structures, concepts, vocabulary structures, indexing infrastructure were initiated, albeit with paper tools and mechanisms, and the famous punch card, of course, that came in in 1928.

But this is our paradigm, where we focus on patients first and foremost.  From that, we generate new biomedical knowledge.  This, after all, is the practice of academic medicine.  We are hardly unique in that respect.  We learn from that activity and reinsert that knowledge into patient care.  So we have this samsara of continuous improvement that has been going on for a century.  That is the point.

There are a lot of standard activities around that cycle -- data inferencing, knowledge management, decision support.

Dr. Woodcock, what I usually put in the center, because I am a shameless vocabulary ontology guy, is the importance of ontologies and vocabularies.  But I thought I would be a little more holistic today and put informatics as the glue that binds that process.

There are, of course, organizations that are beginning to coordinate and emerge in leadership roles -- AMIA comes to mind, the American Medical Informatics Association -- as a consolidating home for that type of intellectual, academic activity as we evolve into a more formal process.

This isn’t 1907 anymore.  This elegant slide that I could spend the next four hours describing to you is our individualized medicine strategy.  In a nutshell, it is taking characteristics about a patient -- now layering pharmacogenomic and genomic characteristics onto that process -- so that we can tailor medicine effectively.  But underpinning that are metadata standards, ontologies, and informatics as we move forward.

We have consolidated this information into a warehouse.  We don’t call it a warehouse, of course.  We are too refined to do that.  We call it a data trust.  But it’s a warehouse.  It uses the standard techniques of extract, transform, and load.  Marc Overhage, of course, does that for a living these days -- and very well, I might add -- in the Indiana population.  It is the same generic notion of taking information across a network -- in our case, of practices in Minnesota, Florida, and Arizona, together with a five-state regional outreach -- to create, if you will, a Mayo-centric warehouse of this activity, replicating the principles that we have nurtured for a century, with natural language processing.
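
To make the mechanics concrete, here is a minimal extract-transform-load sketch in Python.  The warehouse layout, the site codes, and the SNOMED-style concept identifiers are invented for illustration; this is not the Mayo data trust’s actual schema.

    import sqlite3

    # Hypothetical map from site-local diagnosis codes to one shared vocabulary.
    CODE_MAP = {
        "MN:dm2": "SNOMED:44054006",    # type 2 diabetes mellitus
        "FL:dm-ii": "SNOMED:44054006",
        "AZ:htn": "SNOMED:38341003",    # essential hypertension
    }

    def extract(site_rows):
        # In a real feed this would read from each practice's database.
        for row in site_rows:
            yield row

    def transform(rows):
        # Normalize local codes to the shared vocabulary; drop unmappable rows.
        for patient_id, local_code, visit_date in rows:
            concept = CODE_MAP.get(local_code)
            if concept:
                yield (patient_id, concept, visit_date)

    def load(conn, rows):
        conn.executemany("INSERT INTO warehouse VALUES (?, ?, ?)", rows)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE warehouse (patient_id TEXT, concept_id TEXT, visit_date TEXT)")
    site_feed = [("p1", "MN:dm2", "2007-01-05"), ("p2", "FL:dm-ii", "2007-01-09")]
    load(conn, transform(extract(site_feed)))
    print(conn.execute("SELECT * FROM warehouse").fetchall())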

In 10 minutes, there is only so much we can cover.  I have chosen to focus on the natural language processing agenda.  That has come up a number of times in this little set of presentations.  We have used it, for example, in our parsing of medications, so that we capture medications that aren’t necessarily picked up in the prescribing databases -- obviously, we have the standard access to prescription information -- but there are a lot of meds mentioned in the notes that constitute exposure.

The intellectual force behind our NLP infrastructure was Sergey Pakhomov, who is now at the University of Minnesota.  I think he is in the room, actually.  He, for example, published a paper where he looked at status information associated with drug exposures -- start-stop-change types of events -- and, obviously, the other side of the coin, which is diagnostic or sentinel events being tagged with ontology representations in SNOMED codes.  We all recognize that there is only so much one can do with reimbursement coding.  Its flaws -- I think many of us have written widely about them.
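
As an illustration of what one such annotator does, here is a toy Python sketch that finds drug mentions in a note and tags a start/stop/change status from nearby trigger words.  A real UIMA-style pipeline is far more sophisticated; the tiny drug lexicon and trigger lists here are illustrative only.

    import re

    DRUG_LEXICON = {"warfarin": "RxNorm:11289", "simvastatin": "RxNorm:36567"}
    STATUS_TRIGGERS = {
        "start": ("started", "begun", "initiated"),
        "stop": ("stopped", "discontinued", "held"),
        "change": ("increased", "decreased", "titrated"),
    }

    def annotate(note):
        """Return (drug, code, status) annotations found in one clinical note."""
        annotations = []
        for sentence in re.split(r"(?<=[.!?])\s+", note.lower()):
            for drug, code in DRUG_LEXICON.items():
                if drug in sentence:
                    status = next((s for s, words in STATUS_TRIGGERS.items()
                                   if any(w in sentence for w in words)), "mention")
                    annotations.append((drug, code, status))
        return annotations

    print(annotate("Warfarin discontinued due to bleeding. Simvastatin increased to 40 mg."))
    # [('warfarin', 'RxNorm:11289', 'stop'), ('simvastatin', 'RxNorm:36567', 'change')]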

The point is, we have the capacity for longitudinal tracking of events and sequences and sequelae in our patient populations that lie below what I would characterize as the radar screen of ICD codes and reimbursement codes.  The capacity for association studies, of course, is quite large.

Just a little shameless plug here.  This is an open-source resource.  We convinced IBM, actually, to release their unstructured information-management pipeline, which is now available under Eclipse License and OASIS standard.  Our current implementation has 16 annotators, most of them available as open-source and the rest can be modified for open-source.

But the point is, if we are going to talk about synergizing infrastructures and informatics, having a commodity, scalable, community-developed, open-source resource for sophisticated NLP processing and ontology mapping would be a handy thing to have.  We are offering a beginning point for that sort of thing.

We are also cognizant of the genomic era.  We all know this.  But the point is, the semantics used among the community of basic biology researchers and the semantics used in the clinical community are not the same.  We are one of the founding members of the Pharmacogenomics Research Network.  Dick Weinshilboum was the founding chair of this.  Of course, this is yet another emphasis that we all recognize.  The Vioxx example may be a good one.  It could potentially be an enormously useful drug for a very large population of patients and yet dangerous for others.  Teasing that apart at a molecular biologic level is where we want to go.

But here is our semantics problem.  The red line represents back-of-the-envelope activity by the bioinformatics community; the green line is back-of-the-envelope activity by the clinical informatics community.  What we have, of course, is a chasm of semantic despair, where those guys use the gene ontology and we use SNOMED.  How do you fit that together?  That is what the National Center for Biomedical Ontology, of course, is set up to do, one of the BISTI roadmap grants at Stanford in which Mayo is one of the core participants.

Enter the CTSA community.  We heard about the SPIN network.  One of its logical follow-ons -- as Dr. von Eschenbach, were he here, would note -- was the caBIG community, the cancer biomedical informatics grid.  This has the potential to evolve beyond just cancer or pathology to the whole spectrum of science.  I think that is the vision of CTSA.

This language was taken shamelessly off their Web page.  There is nothing here you don’t know, but simply to say that Mayo, of course, is one of the founding academic centers of CTSA.  The ability to bring informatics science infrastructure to these questions may not solve the Guillain-Barré problem, simply because that is a rare event, but there are lots of drug-disease outcome events that are more common, which smaller networks -- ideally, a network of networks, as Rich and Ken were pointing out -- could address using grid technologies and caBIG technologies and CTSA technologies.

What can something like a CTSA network do, assuming one were to evolve effectively, analogous to the vision that has been articulated in caBIG?  It can promote shared standards of clinical repository design and implementation.  We all build our own warehouses.  That is the problem.  What we need are warehouses that share standard models, vocabularies, and ontologies that support aggregation and surveillance in a scalable, interoperable way.  “Interoperability” is the key word.  Many of you who know me -- or, for that matter, Marc or Clem, if he were sitting there -- know that we are very, very active in the standards communities.

It can also deploy NLP resources, in a scalable, commodity, open, accessible kind of way, that are suited to use case.  Open-source is important, not simply because of the cost factor, but, quite frankly, because of the configurability -- the ability to adapt those tools to local environments very, very effectively.  And, of course, it can leverage the pharmacogenomics agenda, which clearly would be pertinent to any kind of Sentinel Network activity.  Doing bland, one-size-fits-all outcome determinations of drugs is not the state of the art and is not necessarily particularly useful.

I beat my time, I think.  Thank you.

DR. SHUREN:  Let me ask if there are questions from the panelists?

MS. CRONIN:  Chris, I was wondering if you could comment on an open-source model and how that might be used by both industry and public health agencies?

DR. CHUTE:  That is an excellent question.  Of course, it’s raging in the cancer informatics community.  The caBIG is an effective model.

The bioinformatics community has leveraged it enormously well.  Virtually all software that is used in genome scanning and genome interpretation, microarray analysis -- the usual suspects -- is open-source, is collaboratively developed, is widely used.  It spares the research community the chore of reinventing the wheel, put quite simply.  Given that the amount of funding that is available in the research community is not likely to increase dramatically, trying to impose market conditions on commodity software needed to do the jobs that we need to do, both for surveillance sentinel networks and related types of activities, is probably not going to be a feasible undertaking.  There is probably not enough money left in the system.

But most of us want to get the job done.  Among most members of the academic community that I bump into -- clearly in the caBIG community, clearly in the CTSA community, the standards community -- there is a fairly high degree of enthusiasm for open, interoperable standards, specifications, and, oh, yes, implementations as software.  I think there are components that, with nurturing and sustenance, could emerge as commodity tools that would be relevant to the Sentinel Network, not to mention the NHIN (but that’s another story), that would serve the public and serve medicine and biology -- understanding, care improvement.  All the motherhood types of notions that I think we share could be greatly accelerated with the advent and availability of sophisticated open-source software.

There is ample precedent for very high reliability -- the Apache servers and Tomcat servers on Web nodes and the like.  Ninety percent of the Internet is built on open-source software, and it works.

DR. SHUREN:  Other questions?

DR. WOODCOCK:  I have a question about that.  Would it be necessary to assemble a consortium that could do this development, or is it only necessary to provide money for existing consortia?

DR. CHUTE:  Just send money?  That’s always desirable.

It’s consortia of consortia, really.  What is absent -- we have had, as many of us are painfully aware, the evolution in ANSI of the HISPP, then the HISB; now it is HITSP, the Health Information Technology Standards Panel.  All of those are clinically focused.  Indeed, our colleagues here from ONC are only beginning to grapple with the question of, to what extent can standards and interoperability be nurtured and focused in the research community?  What would be needed to accelerate and foster that?

In fairness, I don’t think it’s huge amounts of money.  But it is clearly as much coordination and alignment as anything else.  It begs the question of what would be an optimal forum.  Whether the CERTs could function that way is a fair question.  Whether CTSA could function that way is a fair question.  We could go down a long laundry list of organizations that could function that way.  I think, at the end of the day, we are really talking about a meta-organization of research infrastructure.  Whether that is led by the NIH roadmap, whether that is led by a federal consortium -- frankly, whether that is led by the Europeans or the Japanese, who clearly are more invested in this, with their GridWorld in Europe -- is an interesting question.

DR. SHUREN:  Thank you very much.

Dr. Rudolph.

 Agenda Item:  Presentation by Barb Rudolph

DR. RUDOLPH:   Good morning.

I am going to bring us down to, probably, more of the real world and talk a little bit about the pond, what it’s like out there as a purchaser of health care.

I wasn’t even going to mention this slide, but I think we have gotten away from the problem.  The problem really is that we are not getting the kind of quality of care that we need and that we are actually paying a lot for.

This cost of poor-quality care, whether it’s care delivered by hospitals or whether it’s problems with medical devices or the wrong medication, is really costing all of us a lot of money, and also a lot of pain and suffering.  I just want to remind us all of that before I go on.

The Leapfrog Group was started in the year 2000, with some money from the Business Roundtable and also the Robert Wood Johnson Foundation.  We tried to focus on some very simple concepts: providing information for those folks who had to make health-care decisions, whether consumers or purchasers -- we wanted to address both.  More recently, we added a new mission, and that is to address the idea of high-value health care through incentive and reward programs.

Our membership -- I am not going to read through it.  There are a lot of Ms, a lot of Gs, a lot of Caterpillars, Ciscos, cars, shipping, but also a lot of state purchasing groups -- the Los Angeles Employee Retiree Association and others, the Wisconsin Employee Trust Fund.  We have partners who help us with certain aspects.  We now have covered lives of 8.3 million employees, spending almost $60 billion a year on health care.

What do we do?  We try to educate and inform enrollees.  We try to compare providers openly.  We try to reward superior provider value.  That includes things like public recognition, primarily, and some volume shifting to hospitals and other groups that do high-risk procedures.

We think there are really things that are critical (I think this might actually work for the FDA as well):

  • We need to increase the amount of transparency in the health-care delivery system.
  • We need to standardize our measurements and our practices in terms of the type of data we collect, the types of measures that we use.
  • Then we need to incentivize folks to do the right thing.  Up to now, we are not doing much of the right thing.

We have a focus on four areas:

Computerized physician order entry, which is probably the most relevant to this discussion.  After about four or five years, we have developed an evaluation tool that assesses how CPOE systems are actually implemented.  They don’t come to the providers completely done.  The providers have to make a lot of decisions about alerts and other kinds of things to determine how that system should function in their particular hospital or physician office.  So we now have a new tool.  It is the only one of its kind.

There are no tools to evaluate how well hospitals or physicians implement EHRs, for example.  There is nothing like that available.  There are just now some standards being developed by CCHIT, the Certification Commission for Healthcare Information Technology.

We also look at staffing issues.  I won’t go into details.  We look at experience as being an important factor in providing quality of care.

Then we have a set of safe practices, which include the 30 safe practices endorsed by the National Quality Forum.

We have a voluntary system of data collection, a hospital survey on quality and safety issues.  As of February, for the last survey cycle, 1,334 hospitals had responded voluntarily.  That covers quite a wide area of the country -- I think it’s about 48 states.  We have a system of motivating hospitals to participate by engaging a set of regional rollout leads, who are actually large employers and/or large purchasers or coalitions of purchasers.

We have 33 regions at this point.  Our regions are those in green.  Health care is local.  In order to get health data, there also has to be a local mechanism; you have to have incentives in place at the local level.  I think that will be critical for the FDA to think about as well:  How do you reach out locally to make sure this information comes into your organization?

We have taken an approach that we want to use all measures that are NQF-endorsed.  That is, they have gone through a consensus body -- again, talking about that open transparency -- in terms of having lots of different folks being able to give input as to what things should be measured.

Then we have an incentive and reward program, which is a national program.  I always laugh.  It says “turnkey.”  It’s probably about as turnkey as the CPOE systems are.  You have to make a lot of local adaptations.

I am going to stop talking about Leapfrog and I am going to address a couple of things that were the questions, if that’s okay.

In addition to working for Leapfrog, I also do consulting for the National Association of Health Data Organizations, and I am a former bureaucrat in a state.  I ran a health data agency, where we collected 26 different population-based data systems, including vital records.  Actually, no one has mentioned that today as a possible source for information, but death certificates certainly are a potential source for information.

I also worked at the University of Wisconsin-Madison, where we have in our research center the MDS, which is the Minimum Data Set for nursing homes, which is by no means “minimum.”  It has lots of clinical data in it.

I want to echo some of the things that were said about data systems that are out there that we might make use of, from my experience.  Medicare data was mentioned, Medicaid data.  The MDS is a potential, death certificate information, some of the registries that are funded by the CDC and others in states, state discharge systems.  There are now 38 states that have hospital discharge data available.  It is put to a lot of different uses.  In many cases, it is available as public-use datasets.

There are issues, obviously, with any type of large data system.  There are issues related to the codes that are available.  There are issues related to the privacy of the data.

There are issues related to the staffing, which was brought up, too.  In order to work with these very large data systems, you really have to have a really good understanding of the data.  You don’t get that over multiple data systems.  You get that by working with one data system for a really long time.  Yet what is really important is integration.  But there are lots of barriers to integration, one being that most individuals working in data analysis and collection are experts in one area, in one data system.  When you begin to cross over, you have some issues, because you don’t really understand the data well enough as an analyst to fully appreciate some of the nuances of the data.

You also have proprietary issues.  No one has mentioned this.  If I collect a particular type of data, I am the owner of that data, whether it is a discharge system, whether it is a death certificate system.  Getting those individuals who are data collectors to actually share data is no small task.  In fact, I have done a white paper through NAHDO on the barriers to data integration across systems.  This is not an easy thing to do.

There are some examples where it has worked.  NHTSA, the National Highway Traffic Safety Administration, has a project called CODES that has put together motor vehicle crash data, discharge data, ambulance run data, and a few other pieces, such as some emergency room data.  They have been able to look at problems with specific motor vehicles as a result.  Actually, a lot of the research that was done on airbags and other kinds of safety devices in vehicles was the result of this particular integration of data.  No small task.  They have 18 states now, I think, collaborating on this data system.

But this is not easy stuff.  I sort of have this sense from the prior speakers that this is going to be somewhat electronic and easy.  It’s the people factors here that are going to be really tough, I think, more than the electronics.

I will stop there.

DR. SHUREN:  Thank you.  Questions?

I will ask you a question.  You have emphasized, I think, the need to have the right incentives to drive folks to engage in the right behaviors, maybe using reimbursement as a means of getting there.  One question is, what has the Leapfrog Group done so far, or what may it be interested in doing in the future, to help promote those incentives -- for folks investing in the right kind of infrastructure for data collection, for actually providing data (we keep hearing that as a big issue, even getting people to provide the right kind of data and, if there is agreement on terms, to use them), and then, lastly, for sharing data?

DR. RUDOLPH:  A couple of things come to mind.  We have been working with a number of other large purchasers and plans and so forth to get everybody kind of on the same page in terms of pay-for-performance and have been doing a number of workshops and training on pay-for-performance.  We have also been sharing the data.  There are a number of different groups that have taken our data from our national program and from our survey, and have used that to assist in their own efforts at providing information either to consumers or to the providers themselves.

We have also been working very hard at the idea of standardization.  We are participating in the National Quality Forum.  We are part of the Purchaser Council, which is actually a pretty small council compared to some of the other ones.  For purchasers, this is not their full-time job, so it’s hard to get employers engaged in the process of looking at specific measurement methodologies and so forth.

Those are the things we have been working on.  We have also been working with health plans, because they are the implementers of our policies, and getting them to take them on also and have them be part of the process.

So there are lots of different ways to approach this problem.  I think that’s what it is going to take to solve this issue, too.  You are going to have to do a lot of different things simultaneously and a lot of practical things.  Getting people to share their data is no small task.

DR. SHUREN:  Thank you very much.

Dr. Caldwell.

 Agenda Item:  Presentation by Michael Caldwell

DR. CALDWELL:  Good morning.

What I would like to talk with you about this morning is a construct that has been put together from some discussions with Kathy Giacomini at the PGRN, the Pharmacogenetics Research Network, with Russell Teagarden at Medco, with David Page at the University of Wisconsin, and with our group at the Marshfield Clinic.  It is not a working construct, but it is a construct of working parts.  Hopefully, it will provide us a path through the forest of developing a true postmarketing pharmacovigilance system.

From a conceptual standpoint, we looked at the requirements for this type of system as one that starts out with an electronic definition of adverse drug reactions.  That is not trivial.  When you are working with electronic clinical data, defining a true ADR phenotype takes some doing.  Then we feel that there needs to be an ongoing population-based estimate of the incidence of ADRs, so that, as a new drug is introduced into a large population, you actually have the ability to electronically monitor these health events, and you can compare the incidence of ADR phenotypes in the population that is taking the new drug with your reference population.

So those pieces put together, we think, give you a way of actually establishing an increase in the incidence of an ADR phenotype associated with a new drug.
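
A hedged sketch of that comparison in Python: the incidence of an ADR phenotype among new-drug users versus a reference population, expressed as a rate ratio with a log-normal confidence interval.  All counts here are fabricated for illustration.

    import math

    def rate_ratio(cases_new, py_new, cases_ref, py_ref, z=1.96):
        """Incidence rate ratio with a 95% CI (log-normal approximation)."""
        irr = (cases_new / py_new) / (cases_ref / py_ref)
        se = math.sqrt(1 / cases_new + 1 / cases_ref)
        return irr, irr * math.exp(-z * se), irr * math.exp(z * se)

    # 12 ADR-phenotype cases in 8,000 person-years of new-drug exposure versus
    # 30 cases in 60,000 reference person-years (all numbers invented):
    irr, lo, hi = rate_ratio(12, 8000, 30, 60000)
    print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # IRR = 3.00 (1.54-5.86)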

To start that off -- again, to try to define a true ADR phenotype -- we have already had discussions and some work with the Pharmacogenetics Research Network, which has been mentioned earlier and is outlined here.  It is spread across multiple academic institutions around the country.  It receives essentially all of its funding through the National Institutes of Health and is involved in multiple areas of pharmacogenetics.  But it has had as one of its primary functions to begin to establish phenotypic criteria to ascertain cases of adverse drug reactions, with interest in drug-induced liver toxicity, statin-induced myopathy, drug-induced renal toxicity, and cardiac events as well.

These are the co-chairs.  Their primary goal for this project is to facilitate studies of genetic risk factors for ADRs.

I will give you some examples of criteria that are used, for instance, in the ascertainment of rhabdomyolysis: not only using standard discharge diagnoses -- which, as has already been discussed this morning, are incomplete to a large extent, because ICD-9 lumps rather than splits data -- and other admitting diagnoses, but then adding other attributes, such as laboratory values, that can be meaningful.

The second piece is looking at an ongoing population-based estimate for the incidence of ADRs.  To that extent, there is a resource that we have in Marshfield and there are others around the country of epidemiological study areas that I think could be extremely useful.  The one that we have in Marshfield has about 80,000 people, a subset of our 400,000 unique patient population.  These people have been very participatory in studies.  We do not have a lot of ethnic variety in central Wisconsin or in the northern part of Wisconsin, as you might imagine.  Most of our population is northern European.  It is very helpful for genetic studies.  However, the universality of findings needs to be tested in other areas.

The unique aspect of the epidemiological study area is that we capture essentially all of the health-care events of this population, and have for decades.

By using such an epidemiological study area, we can accurately evaluate the population-based incidence and prevalence of disease or, in this case, of a new phenotype, which would be defined as an ADR phenotype.  Then we use a system that we call a syndromic surveillance system, which runs in the background, looking at a defined phenotype and at new instances of that phenotype occurring in the electronic medical record.  Since our electronic medical record monitors all events for patients on a daily basis, it allows you to pick up on clusters of information -- signals, if you will -- fairly readily.  It can allow us to detect a change in the incidence, as pointed out earlier, and show us how that differs from the incidence in the reference population.
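
One simple way to implement such a background monitor -- offered here as an illustrative sketch, not Marshfield’s actual algorithm -- is a one-sided CUSUM over daily counts of the defined phenotype, alerting when the cumulative excess over the reference mean passes a threshold.

    def cusum(daily_counts, mean_ref, k=0.5, h=5.0):
        """One-sided CUSUM; k is the daily slack, h the alert threshold."""
        s = 0.0
        for day, count in enumerate(daily_counts, start=1):
            s = max(0.0, s + (count - mean_ref - k))   # accumulate excess cases
            yield day, s, s > h

    baseline = 2.0                            # expected daily phenotype count
    stream = [2, 1, 3, 2, 4, 5, 6, 5, 7, 6]   # hypothetical counts after launch
    for day, s, alert in cusum(stream, baseline):
        print(day, round(s, 1), "ALERT" if alert else "")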

This gives you an idea of the distribution of health care for the Marshfield Clinic.  As I said, we have about 400,000 unique patients.  If you draw a line through the middle of Wisconsin, we provide care to many of the patients above that line.  We have 41 different regional centers, all completely integrated through an electronic medical record, with a very effective data warehouse for data retrieval.

The electronic medical record at the clinic started in the 1960s.  Since 1975, it has been picking up both inpatient and outpatient records.  All of those events have been tracked electronically since the early 1980s.  Currently, it tracks all health-care activities, including clinical data, family medical history, patient medical history, labs, procedures, et cetera.  It’s event-driven.  It’s updated with each new medical appointment that the patient makes.

An electronic medical record is only valuable if it is used.  Ours is used as part of health care on a daily basis throughout the entire clinic.

In monitoring the over 1.8 million patient visits that we had in 2006, one of our other strengths is that our electronic medical record doesn’t just use ICD-9 and CPT-4 codes, but has departmental lexicons -- apropos of comments that were made earlier -- that were actually designed by physicians to be much more specific as to true diagnoses and events taking place with patients.

Here is the way the distribution works out.  Of the 400,000 unique patients that we see a year, about 80,000 are captured in MESA, where we know that we capture all of their health-care events.  Of those, we have a subpopulation of about 20,000 patients in our Personalized Medicine Research Program, for whom we have DNA, serum, and plasma, and their permission to access all of their health-care records for studies in the areas of pharmacogenetics, genetic epidemiology, and population genetics.

But even with that system, it is clearly, as has been mentioned today, not large enough to pick up events as far as early diagnosis of adverse drug reactions.  As a consequence, we need some ability to electronically monitor health events as a new drug is introduced into a large population.  A concept that we want to talk about, which hasn’t been talked about earlier, is to use some of the prescription-based systems that do see large numbers of prescriptions on an ongoing basis as a way of perhaps picking up those signals.

This comes from some concepts from Russell Teagarden at Medco.  Medco administers public and private employer plans of all different sizes -- health plans, labor unions, government agencies, and individuals served by Medicare Part D.  It serves approximately 60 million lives.  The prescriptions filled by Medco mail service in 2006 numbered about 89 million.  They process about 550 million claims on an annual basis.  They cover about 60,000 pharmacies.

So their concept was that by looking at prescribing practices as new drugs come into the marketplace and looking at other prescriptions that occur along with the new drug that is in the marketplace, you may very well be able to pick up a signal -- coarse though it may be initially -- of adverse drug reactions that are occurring very rapidly because of the huge volume of prescriptions that are being monitored.  These are just examples of types of things that could be used in that situation.
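
As an illustrative sketch of that idea, the following Python fragment compares how often a “marker” dispensing (a drug typically used to treat a suspected adverse event) follows the new drug versus a comparator.  The dispensing records, drug names, and 90-day window are all invented; a real system would also need confounding control and clinical review.

    from collections import defaultdict

    # (patient_id, drug, day) dispensing records -- entirely hypothetical.
    records = [
        ("p1", "newdrug", 0), ("p1", "steroid", 20),
        ("p2", "newdrug", 0),
        ("p3", "olddrug", 0), ("p3", "steroid", 50),
        ("p4", "olddrug", 0),
        ("p5", "newdrug", 0), ("p5", "steroid", 15),
    ]

    def marker_rate(records, index_drug, marker, window=90):
        """Share of index-drug starters who fill the marker within the window."""
        by_patient = defaultdict(list)
        for pid, drug, day in records:
            by_patient[pid].append((drug, day))
        starters, hits = 0, 0
        for fills in by_patient.values():
            start = min((d for drug, d in fills if drug == index_drug), default=None)
            if start is None:
                continue
            starters += 1
            if any(drug == marker and start < d <= start + window for drug, d in fills):
                hits += 1
        return hits / starters

    ratio = marker_rate(records, "newdrug", "steroid") / marker_rate(records, "olddrug", "steroid")
    print(f"signal ratio = {ratio:.2f}")   # ratios well above 1 warrant clinical review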

Once you have picked up an idea of a signal, you could translate that to a system that would allow you to better define the signal, such as the electronic medical record of the clinic or other groups that could do that.

Another important piece to that is to add machine learning as a way of effectively analyzing the data that you are seeing, and also predicting differences in populations that may be receiving the drug.  This is something that we have been using successfully so far with our electronic medical record and David Page at UW, to use machine learning to begin to separate the characteristics of individuals who are on medications and identify complications of medications early in the process.
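
For illustration only, here is a small sketch of that kind of model: a logistic-regression classifier fit on simple EMR-derived features to flag patients at risk of a complication.  The features, effect sizes, and labels are synthetic; this is not the Marshfield/UW model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    age = rng.normal(60, 12, n)
    creatinine = rng.normal(1.0, 0.3, n)
    interacting = rng.poisson(1.0, n)

    # Synthetic "truth": complication risk rises with age, creatinine, interactions.
    logit = -10 + 0.08 * age + 2.0 * creatinine + 0.5 * interacting
    complication = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([age, creatinine, interacting])
    model = LogisticRegression().fit(X, complication)

    # Score a hypothetical 72-year-old with creatinine 1.4 and 3 interacting drugs.
    print(f"predicted risk: {model.predict_proba([[72, 1.4, 3]])[0, 1]:.2f}")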

I am just going to skip to the end for the concept, just because of time.  This is, in general, the point that we are talking about.

You would use something like a Medco, some signal-generating institution handling huge numbers of prescriptions daily, which would develop an electronic surveillance system that would detect a signal of a problem with a new medication.  That signal, and the means for developing that signal, could be enhanced through machine-learning technology.  That signal and the definition of a true ADR could be determined through help with the Pharmacogenetics Research Network and others who can help define the ADR.

Notification of the signal comes to places like the Marshfield Clinic, the HMO Research Network, and other institutions that can delve deeper into the clinical data for these patients and, with a good understanding already of an ADR phenotype, validate the signal.  Then, when you have availability of plasma and serum and the like, you can begin to look at differentiating biomarkers for the individuals who have or have not developed that ADR.

I said that very quickly, but I have some time for questions.  If there are some, I can try to answer them for you.

DR. SHUREN:  Thank you.  Questions from the panel?

As you were pointing out -- and I know Dr. Woodcock had raised this, and some others -- what is important is not just simply identifying that there may be particular adverse events associated with certain medical products, but what, in fact, the genomic underpinnings for those adverse events are.  Maybe we can actually identify people at risk, and also maybe have a better understanding and be able to predict, in fact, for products down the line what risk they may actually pose to individuals.

What we are hearing is that, obviously, there is a need for collecting samples, DNA material.  What steps are actually being taken to ensure privacy -- that that information is protected, not being linked back to individuals, and not being made public?

DR. CALDWELL:  I can tell you about our own, the database that we have in the Personalized Medicine Research Program.  We have a certificate of confidentiality from the NIH to protect the individuals.  Wisconsin, fortunately, has state laws that eliminate discrimination based on genetic information for employment or insurance purposes.

Our genetic data are kept in a completely separate database that is not physically attached to our clinical database.  For the people who use the genetic database, the information is de-identified between the two databases and then recoded, so that people who are working with the genetic database cannot identify the individual associated with that genetic information.  The people who work on the genetic database do not have access to the clinical database.  So you can’t go back and try to figure out who the person was that you found something in.
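
A simplified sketch of that separation in Python: the clinical store and the genetic store never share patient identifiers, and a separately held linkage table maps a random study code to each.  All identifiers and data here are placeholders.

    import secrets

    clinical_db = {"MRN-1001": {"dx": ["rhabdomyolysis"], "meds": ["statin"]}}
    genetic_db = {}   # keyed only by study code, never by MRN
    linkage = {}      # study code -> MRN; held separately, honest-broker access only

    def enroll(mrn, genotype):
        code = "S-" + secrets.token_hex(4)   # random code, not derivable from the MRN
        linkage[code] = mrn
        genetic_db[code] = {"genotype": genotype}
        return code

    code = enroll("MRN-1001", "SLCO1B1 *5/*5")
    print(code, genetic_db[code])                      # all a genetics analyst sees
    print(linkage[code], clinical_db[linkage[code]])   # requires the guarded linkage table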

So there are a number of different steps in that regard to actually ensure the patient’s privacy.

The clinic itself established completely new privacy rules for genetic privacy as part of this project as well.

But collecting samples by themselves is obviously not the end result.  The samples have to be tied with deep, rich clinical information to be of any value at all, in my mind.  That has been one of the problems with many of the samples that have been collected so far in clinical trials and the like.  They are collected for a very specific, specified study, where only specific amounts of clinical information are available.  So the utility of those samples becomes somewhat limited as a result of that.

Collecting samples in conjunction with rich and, to the extent possible, complete clinical information is critical.

DR. SHUREN:  Along those lines -- there are a lot of protections out there in terms of what you can actually do with samples -- have you seen, on the one hand, any barriers from regulations that may be out there?

The flip side is, are there insufficient protections, such that more may be required to ensure that there isn’t any misuse or re-identification of individuals?

Institutions can always impose their own practices, above and beyond.  But do you think the current regulatory framework is both sufficient, on the one hand, to protect individuals, but on the other hand, of sufficient flexibility to ensure that important uses, like research to help aid clinical care, are also built in?

DR. CALDWELL:  That is a difficult question to answer.  The samples that have been collected, in many cases, have been collected for totally different reasons -- in general, in most cases, for research studies.  These issues have not been addressed, to a large extent -- and certainly not successfully, in most cases -- on a national basis, so as to establish federal guidelines along those lines.

One of the ones that there has been a great deal of interest in having is just a federal guideline that prohibits discrimination based on genetic information for health care and insurance purposes, which would be a great deal of help in these areas.  It is one of the areas that patients are concerned about.  Before we started our study, we did focus groups, many of them, to try to find out what patients’ real feelings were about having their genetic information being evaluated and their clinical information being evaluated simultaneously.  It gets pretty close to where you live when you do that.

If you put enough safeguards in place, they feel very comfortable about that.  But they have to trust the institution and they have to trust your safeguards.  To a large extent, it becomes local, at this point, because we don’t have a lot of federal protection.

MS. CRONIN:  You mentioned early on in your presentation the need for a definition of an electronic adverse drug reaction.  I am wondering, beyond the standards-development work in that area, whether or not you think there is a common working definition and whether or not anyone has done any work on identifying data elements that would be necessary for certain serious adverse drug events that might be of most interest to public health?

DR. CALDWELL:  I think that work is ongoing.  In certain areas, there are effective phenotypes that have been identified.  A collection of those and a validation of those, in most cases, are still lacking.  That is one of the reasons I think it’s an important construct for anything we put together here.  If we are going to be working with electronic information, defining an electronic phenotype is not trivial.  Between the noise that is generated just in the phenotype itself and the added noise that comes in when you start comparing data across multiple different platforms, it becomes very difficult sometimes to come to a valid conclusion.

DR. SHUREN:  Thank you very much.

Liz Paxton.

 Agenda Item:  Presentation by Liz Paxton

MS. PAXTON:  Thank you very much for the opportunity to present our experience in developing a postmarket surveillance system for total joint replacement implants.

First of all, I will start off with a brief background on total joint replacement within the United States, followed by a discussion of the importance of establishing a postmarket surveillance system for total joints.  Then I will share with you our Kaiser Permanente experience in the development and implementation, cost savings and changes in practice, expansion within and beyond orthopedics, and the key elements for success in the development of a postmarket surveillance system.

Each year in the United States, we perform approximately 600,000 total joint replacement procedures.  For 2030, the projected volume of total knees is 3.48 million per year and an additional 572,000 per year for total hips.  The annual hospital charges projected for 2015 include $17.4 billion for primary total hips and $40.8 billion for primary total knees.  In addition, revision procedures will contribute an additional $3.8 billion for hips and $4.1 billion for knees.

Despite the high cost and volume associated with this procedure, we currently do not have a mechanism to identify revision rates within this population, to identify patients at risk for failures and revisions, and to assess postmarket implant performance.

We also do not have a mechanism to identify and notify patients with recalled implants.  This is critical.  There have been numerous situations in which implants have failed and have been recalled, resulting in significant pain and suffering for patients.  For example, in the late 1990s, there was a hip stem failure, resulting in catastrophic levels of pain and suffering for patients.  Those individuals are still presenting currently.

In 2000, there was a hip cup that was recalled.  Over 17,000 patients had that implant.  That recall, in particular, was related to traces of machine oil on the cup, again resulting in significant pain and suffering for patients, who had to come back in and have the procedure revised.

Although we do not have a current mechanism in the United States for monitoring implant performance nationwide, there are several existing national registries.  For example, the Swedish Hip Registry was established in 1979 and has over 300,000 total joints registered.  They have focused on a minimal dataset, consisting of patient, implant, and technique.  They have been able to reduce their revision rates for total hips from 8 percent to 4 percent by providing timely feedback to surgeons on specific techniques.

Considering the quality and patient safety issues, as well as the high cost and volume associated with this procedure, we developed a total joint replacement registry, or postmarket surveillance system, at Kaiser Permanente.

Kaiser Permanente is the nation’s largest nonprofit health plan, an integrated, prepaid health-care delivery system with 8.6 million members.  We have over 12,000 physicians and over 150,000 employees, and we serve eight regions.  We have 37 medical centers and 431 medical offices.  We perform over 14,000 total joints each year.

In 2001, our orthopedic surgeons initiated and developed a total joint replacement registry.  The registry consists of standardization of total joint documentation.  All physicians use a standard preoperative form at the clinic when a patient comes in for a preoperative visit, to capture baseline information.  We also have a standardized intraoperative form that captures information on the implant characteristics, the technique, and the diagnosis.

In addition, each time the patient returns to the clinic, we capture complications and status of that patient in a follow-up form that serves as the physician’s progress note.

This contributes to a database of all total joint procedures consisting of patient demographics, surgical techniques, implant characteristics, and outcomes.

In addition to the registry data we capture at the point of care with documentation, we also utilize existing administrative databases, such as our hospital database, OR information, pharmacy, lab, radiology, and cost information, and merge this information with the documentation provided by the registry, for a more comprehensive database.

We have 95 percent voluntary participation from over 300 surgeons at 38 centers.  We have 50,000 total joints registered at this point, and we have five Kaiser Permanente regions participating.

Just recently, we transitioned to the electronic health record that we refer to as HealthConnect.  As this system was built, it used free-format text for physicians’ progress notes.  In order to capture information in defined fields, we added additional functionality.

This is an example of a documentation flow sheet which required additional programming.  This is at the pre-op visit, some of the information the physician enters.

This is an example of the post-op visit, in which we capture complications and any problems associated with the procedure.  On the right there is a pull-down menu that gives you an idea of the multi-select option within this particular system for capturing data.

After information is entered at the point of care by the physician, that information is automatically populated within their progress notes.  This provides an opportunity not only to capture those defined fields, but to add free text as well.

This is an example of what a complete progress note might look like after total joint replacement.

This is an example of some of the functionality provided by Healthconnect in terms of looking at an episode report in which you can compare progress across time for a particular patient.

So that is an example of how we implemented and developed the total joint registry.  Now I would like to transition to sharing with you the benefits of the total joint registry and how we have used this information.

First of all, we now know what our revision rates are.  Prior to the development of the registry, we did not know how well we were doing in terms of total joint replacement.  In addition to knowing our revision rates, we can track implant performance across time.  We can identify the best implants for our patients.

This is an example with total knees in which we tracked femoral component performance across time.  This is survival of the implant, with the endpoint being whether or not the procedure was revised.  We did see a statistically significant difference with one of the implants and thought there was a problem.  However, after feeding that information back to the surgeons and receiving information with regard to the design of that implant, we identified that it was actually the uncemented technique associated with that implant that was contributing to the higher rate of failures.
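
For readers who want the survival bookkeeping spelled out, here is a small Kaplan-Meier sketch with revision as the event and censoring for implants still in service.  The follow-up times are fabricated; registry analyses would, of course, be far larger and risk-adjusted.

    def kaplan_meier(times, events):
        """Stepwise survival; events: 1 = revised, 0 = censored (still in service)."""
        data = sorted(zip(times, events))
        at_risk, surv, steps, i = len(data), 1.0, [], 0
        while i < len(data):
            t = data[i][0]
            ties = [e for tt, e in data if tt == t]
            revisions = sum(ties)
            if revisions:
                surv *= 1 - revisions / at_risk
                steps.append((t, surv))
            at_risk -= len(ties)
            i += len(ties)
        return steps

    months = [6, 12, 12, 18, 24, 24, 30, 36, 36, 40]
    revised = [0, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    for t, s in kaplan_meier(months, revised):
        print(f"month {t}: implant survival {s:.2f}")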

In addition to tracking implant performance across time, we have also successfully identified and monitored two recalls and advisories.  This resulted in cost savings by preventing the chart review otherwise needed to identify patients with recalled components.  So the registry provides a mechanism for immediate identification and notification in a recall situation.

In addition to the ability to identify and notify patients during recall and to track implant performance across time, we have also used the registry as a method to provide a foundation for more in-depth research studies.  For example, we have identified certain patient factors that are associated with risk of failure and revision.  Obesity is one of those risk factors that we identified within the registry.  Based on the finding that obese patients had a two times higher risk of postoperative infections, we implemented a multicenter exercise and weight-loss program to determine if we could prevent or delay the need for total knees in obese patients.

We have also identified risk factors associated with dislocation after total hip replacement, and we have identified issues surrounding different techniques.

For example, here is the partial knee replacement survival in relation to the total knee survival.  We found that the partial knee had a revision rate of 11 percent at two years, in comparison to the total knee, at 1 percent at two years.  After providing that information to surgeons, we saw a significant decrease in the rate of partial knees, dropping from 4 percent to 2 percent and preventing 16 revisions through information sharing from the registry, at a cost savings of over $500,000.

Considering our success with the total joint registry, we have expanded within orthopedics.  We have several other additional registries.  Our future plans are to look at opportunities for cardiology.

So what have we learned from our experience?  First, there is a need for a consensus on the scope of the project.  Specific aims and hypotheses will determine data-collection methods.  We focused on a minimal dataset at the point of care in terms of documentation and capture.  This has been successful for us.

Benchmarking was also critical in the developmental phase.  We learned a lot from the Mayo Clinic, from the Swedish Hip Registry, and other existing registries in terms of moving forward in our own development.

Involving physicians and frontline staff in the development and all phases of the project is critical.

We have focused on reducing respondent burden, replacing existing systems, creating standards of documentation, extracting data from existing administrative databases whenever possible.

The incentives for voluntary participation include:

  • Minimal burden of documentation at the point of care.
  • Physician and staff involvement in the development and all phases of the project.
  • Timely feedback on techniques, patient factors, and early failures of implants.
  • Physician-specific data provided upon request.
  • The identification and notification of recalls and advisories.
  • Support for research projects.

We have developed a system for immediate identification and notification of recalled components, identification of patients at risk for failures and revisions, and identification of clinical best practice for improving patient quality, as well as providing a framework for more in-depth research studies to provide incentives for participation.

DR. SHUREN:  Thank you.

DR. GROSS:  I thought that was very interesting.  Could you speak a little bit about your interactions with manufacturers?  Two or three times during your talk, you mentioned technique-related issues.  Was that reflected in labeling, in terms of training?  Did you have interactions with manufacturers?

MS. PAXTON:  We have had interactions with manufacturers with regard to specific implants that were recalled in relationship to manufacturing issues.  We have also identified physician-specific variables associated with revisions.  For example, we found with the partial knee replacement that there was a learning curve associated with failures, with 58 percent of the failures occurring within the first three cases.

DR. GROSS:  A follow-on question.  In a similar light, has this grabbed the attention of the American Academy of Orthopaedic Surgeons?  They have been talking about establishing a registry.  Have you interacted with that group?

MS. PAXTON:  Yes.  We have representation on that group.  I believe they met with CMS yesterday, requesting support for developing a level-1 data registry that would have a minimal dataset and capture information nationwide.  We are involved with that process.

DR. SLUTSKY:  Is there any information that you can gather from the 5 percent who don’t participate voluntarily in the registry?

MS. PAXTON:  Predominantly, the physicians who have refused to participate are individuals who are close to retirement, and the chiefs and department administrators have given up on them, unfortunately.  But I really believe 95 percent is a phenomenal rate, because this is additional work on the physicians’ and staff’s part.  It really demonstrates the dedication and unique culture of our organization.

DR. BUDNITZ:  Thank you for this very nice presentation.  You said one of the keys to the success was pretty much consensus and focus around a very specific issue, and a narrow scope.  For the panel, could you tell us how this might apply to the much broader scope of what we are looking at today?

MS. PAXTON:  That’s a very good question.  I think the panel, in focusing on small-scale projects, could benefit in terms of starting on a focused area.  For example, in medical devices, there are a lot of high-volume, very technology-driven implants.  Focusing on those, such as spinal implants, total joints, cardiology implants, would be very beneficial, and then building from that foundation.

DR. SHUREN:  You mentioned that one of the critical aspects for voluntary participation is feedback.  I think we have heard in earlier presentations, too, that that feedback is important.  But one question is, what level of feedback do folks need to feel that they have sufficient investment to go ahead and participate?  What was it in the case of the registry here?

MS. PAXTON:  A lot of the total joint surgeons requested information on their particular practice.  They want to know how they are doing, how they compare to others within their medical center and across the nation.  That was critical in getting the buy-in from the physicians, being able to provide information on their revision rates, as well as providing information on patients at risk for failures and revisions.  That was really key in our success.

DR. SHUREN:  Thank you.  I think we heard the same thing from Dr. Hill as well.  Thank you very much.

Dr. Resnic.

 Agenda Item:  Presentation by Fred Resnic

DR. RESNIC:  Thank you very much.

It’s a privilege to be able to present some of the work that we have done in Boston in applying medical informatics to, primarily, cardiovascular device safety monitoring.  As a practicing clinician in cardiovascular care and in working with the state Department of Public Health efforts in this regard, there really is a coalescence of interest on the part of the providers and the regulating agencies.

What I wanted to talk about, quickly, are the general design principles for an automated safety monitoring system that we have explored; secondarily, to look at the experience in Massachusetts with the cardiac quality dataset, an example of a mandatory clinical-outcomes registry that potentially provides additional and complementary value to some of the datasets we have heard about.  I will discuss, with examples mostly, the pilot automated safety monitoring system that we have worked on.

We have seen these slides over and over: the schematic of taking multiple sources of high-quality data and putting them into a data warehouse.  What I am going to focus on is the monitoring system.  It is a huge assumption to say that one has high-quality data feeding into a data warehouse that is adjudicated and valid, but we will move forward from there.

The monitoring system would be added to the warehouse, running multiple simultaneous analyses using an array of accepted, transparent statistical and analytic options, and built in a generic way, so that data structures that may be unique to cardiology would not preclude the system being used for orthopedics or pharmaceuticals or other therapeutic interventions.

Essential to the development of any useful monitoring system is some level of expectation and risk adjustment.  These systems must be data-efficient.  We have to account for, as has been mentioned, secular changes in outcomes over time, as well as learning-curve effects.  We have to incorporate new knowledge, and possibly expert opinion, especially as it relates to off-label use of medical devices, which, as we know, is very, very common -- and, as well, in pharmaceuticals.

Also essential among the features of a monitoring system is the need to generate interpretable, generally graphic, reports of outcomes over time and to generate alerts that feed back to the providers of the information.  What has been mentioned, and I would echo, is the need for detailed human intervention when alerts fire.  The specificity of such systems, even in our best attempts, is not 100 percent, and we must avoid impugning medical devices or pharmaceuticals when there may be confounding evidence to explain the problem that was identified.

The last point is feedback to the source systems in terms of benchmark information.

In Massachusetts, we have such a system, based on mandatory reporting, within cardiovascular care, with 21 acute-care hospitals that provide angioplasty or cardiac surgical services.  In angioplasty, we use the American College of Cardiology dataset.  We add to that unique identifiers, unique device identifiers, and additional clinical elements to develop the Mass-DAC database registry, which covers about 17,000 cases per year.  The population in Massachusetts is about 6 million.  We add to that by linking the database to the Massachusetts inpatient claims database, as well as the Massachusetts vital statistics database -- basically, the death registry.

We provide rigorous data audits and adjudication of outcomes over time, which is certainly a time-intensive and resource-intensive effort.  This is used to generate, in general, quality reports that are published and public to benchmark one center against another and, soon, one physician against another, to make sure that there are no outliers in safety in terms of the performance of procedures.

The strengths of such a system:

  • All cases are included.  It’s 100 percent inclusion.  It is mandatory.
  • It is granular.  It is adjudicated.
  • There is rigorous statistical review.

However, it is limited.  It is limited by the scope.  It is really only coronary interventional procedures and bypass surgery.  But as an example of monitoring for safety, it is a useful substrate.

The dataset, while broad -- using the ACC-NCDR definitions for angioplasty -- still leaves unmodeled covariates, which we have found on several occasions to be a likely explanation for patterns that we have detected.

The second limitation is the temporal availability of the published reports.  There is a cycle of about a year to 18 months between a case being performed and the reports coming out; the 2004 report came out in late 2006.  We seek to improve that turnaround time tremendously through an automated, electronic monitoring process.

Thinking again about the monitoring system, we had a pilot system with some design features in mind:

  • First, again, a generic data structure that compares outcomes against expectations -- not just for devices, but structured to handle medications and quality of service as well.
  • Support for an arbitrary number of simultaneous prospective analyses:  configure an analysis, set it, and let it alert you when results deviate.
  • Flexible expectation development and inference methods, using classical and Bayesian statistics.
  • Real-time analysis and notification of outlier results, using a flexible alerting system.
  • Multi-user, multi-platform, Web-based.

We call the system DELTA, and it is supported by an NIH R01 grant.
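
[Editor’s illustrative sketch, not part of the presentation:  a generic, configurable analysis definition of the sort these design features call for might look like the record below.  Field names and option lists are hypothetical, not DELTA’s actual schema.]

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AnalysisConfig:
        """One prospective analysis: set it up once, let it run and alert."""
        outcome: str            # any coded outcome, e.g. "major_bleed" -- not device-specific
        exposure: str           # device, drug, or procedure identifier
        expectation: str        # "uniform" | "risk_stratified" | "risk_adjusted"
        inference: str          # "classical" | "bayesian"
        alert_threshold: float  # boundary at which the alerting system fires
        notify: List[str] = field(default_factory=list)  # who is paged on an alert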

These are the multitude of expectations and inference methods that we use.  In the three columns are expectations, from the very simple uniform expectation -- basically, a benchmark, and we use statistical process control methods -- through stratifying by patient risk, up to fully risk-adjusted, using either logistic regression or, for cumulative analysis, sequential probability ratio testing.  We can infer whether we are within expectations or outside of expectations using either classical or Bayesian approaches.
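[Editor’s illustrative sketch, not part of the presentation:  one standard way to implement the cumulative, risk-adjusted monitoring named in that column is a risk-adjusted sequential probability ratio test.  The Python below is illustrative only; the odds ratio, error rates, and resetting behavior are assumptions, not DELTA’s actual code.]

    import math

    def risk_adjusted_sprt(outcomes, expected_risks, odds_ratio=2.0, alpha=0.05, beta=0.20):
        """Alert when the cumulative log-likelihood ratio crosses the Wald boundary.

        outcomes       -- 0/1 observed adverse events, in case order
        expected_risks -- per-case predicted risk from a risk-adjustment model
        odds_ratio     -- alternative hypothesis: each patient's odds multiplied by this
        """
        upper = math.log((1 - beta) / alpha)   # alert boundary
        lower = math.log(beta / (1 - alpha))   # "in control" floor
        s = 0.0
        for i, (y, p) in enumerate(zip(outcomes, expected_risks), start=1):
            denom = math.log(1 - p + odds_ratio * p)
            s += (math.log(odds_ratio) - denom) if y else -denom
            s = max(s, lower)   # resetting variant: old "credit" can't mask a new signal
            if s >= upper:
                return i        # case index at which the alert fires
        return None             # no alert over the monitored series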

Again, these examples are going to be a little more methodologic than what has been shown, but they are a representative sample of applying the informatics to the data registry as it accumulates.

This is an example output, using a single center in our pilot, Brigham and Women’s Hospital, and looking at the cumulative risk of a major bleeding event following the use of a vascular closure device, which is widely used in coronary interventional procedures.  Shown along the x-axis are 36 months of cumulative outcomes data.  The system is simply plotting the cumulative risk of this major complication, which leads to prolonged length of stay, a 20 percent risk of death, and excessive expenditures.

Only out at month 29 do we start to detect an excess risk in our own population.  The red line that is slowly creeping upward is actually the statistical power.  I want to point out that up until about month 28, the statistical power to detect an adverse event, if it was there, was only about 20 percent, even though we had accumulated more than 6,000 cases by that time.  That is because the event rates are very low and variable, changing up and down month to month.
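[Editor’s illustrative sketch, not part of the presentation:  the point about low power for rare, variable events can be made concrete with a back-of-the-envelope calculation under a normal approximation.  All rates and case counts below are hypothetical, not the registry’s figures.]

    from scipy.stats import norm

    def detection_power(p0, p1, n, alpha=0.05):
        """Approximate power of a one-sided test that the event rate exceeds p0,
        when the true rate is p1, after n cases."""
        se0 = (p0 * (1 - p0) / n) ** 0.5            # standard error under the null
        se1 = (p1 * (1 - p1) / n) ** 0.5            # standard error under the alternative
        threshold = p0 + norm.ppf(1 - alpha) * se0  # observed-rate cutoff for an alert
        return 1 - norm.cdf((threshold - p1) / se1)

    # Hypothetical: a 1.5x excess over a 0.3 percent baseline after 1,000 cases
    # yields power well under 50 percent -- rare events hide in the noise.
    print(detection_power(p0=0.003, p1=0.0045, n=1000))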

So we detected this event very late.  Root-cause analysis showed that it was related to a technique, not to the device.  But that was an unmodeled covariate in our dataset -- a caution to rely on human intervention and exploration when the system does alert.

We have also applied this, for validation, to larger datasets.  This is a randomized clinical trial dataset, looking at the risk of major bleeding following exposure to an oral anticoagulant.  What we were examining was whether one of our inference methods might have an advantage over another.  Traditional logistic regression fires after month 7 of the study.  One of our hypotheses was that a Bayesian updating methodology might be more data-efficient and might allow for earlier detection.

What is plotted here (not to dwell on the distributions) is simply this:  starting on the right-hand side, the last curve, in blue, is our original expectation, based on the bleeding risk predicted at the start of the study.

I’m sorry -- this example is, in fact, just showing you how the Bayesian updating works.  This was actually mortality following a drug-eluting stent in our population.  It showed that our expectation in a higher-risk population migrated to a lower risk over time, as we obtained posterior distributions and compared them to our original prior.
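[Editor’s illustrative sketch, not part of the presentation:  the Bayesian updating described here -- a prior expectation migrating as posteriors accumulate -- can be illustrated with a conjugate Beta-Binomial update.  All numbers are hypothetical.]

    from scipy.stats import beta

    # Prior expectation of the event rate, e.g. a 1 percent risk from historical data.
    a, b = 2.0, 198.0

    # One monitoring cycle's accumulated outcomes (hypothetical counts).
    events, cases = 4, 800

    # Conjugate update: the posterior is Beta(a + events, b + non-events).
    posterior = beta(a + events, b + (cases - events))
    print(posterior.mean())           # updated expectation of the rate
    print(posterior.interval(0.95))   # 95 percent credible interval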

Going back now to our risk of major bleeding, the Bayesian approach fired one cycle earlier than our logistic approach.  Perhaps it means nothing, but as an exploration, we are interested in comparing whether one approach may have a higher sensitivity -- perhaps lower specificity -- than another.  For particular high-risk events, perhaps it is important to maximize sensitivity.

Where we are headed is deployment of this automated safety surveillance system at those centers in Massachusetts that have electronic data capture at the point of care.  This is in an effort to do two things.  One is to reduce that cycle time from 18 months down to less, in terms of feeding potential outlying signals back to the local institution -- Brigham and Women’s Hospital, Mass. General, et cetera -- as well as to give an early warning to the state.  There have been two occurrences in the state, in terms of quality, where a particular institution exceeded the expectations for risk-adjusted outcomes.  Unfortunately, that was discovered two years after the fact.  So there were two years of exposure at those institutions to particular providers, which led to grave concerns in Massachusetts that the timing of the analysis was problematic.  Particularly complicating the situation, in the following year the risk-adjusted outcomes were back to normal.  What you do with that information is complex.

This secure, encrypted system will provide the local institutions with the ability to do locally configured creative analyses for safety and quality monitoring, but at the state level, to aggregate the data.  Importantly -- again speaking to the control and the interest on the part of the institutions -- we need to anonymize not only the patient and the provider, but actually also the institution, because this data has not been fully vetted and adjudicated in the same way as the formal process.  But this gives that sort of early warning/first peek.

For the institutions themselves, what it provides is the feedback of the risk-adjustment methods that are being used at the state level, to give them the early warning or the advance notification of how they are going to be evaluated.

From a medical-device perspective and the limited number of pharmaceuticals that are used in these processes, we have an accumulating early-warning system that gives us a much earlier look at mandatory all-comer data coming into the state of Massachusetts in these particular domains.  Again, we are connected to the Massachusetts inpatient claims registry, as well as the Massachusetts death index, to further validate and expand our dataset.

So in summary:

  • Obviously, the detection of low-frequency medical product safety signals challenges our traditional approaches.  Everything we have talked about today has been complementary -- a mosaic of pieces building toward what may be a more unified, more comprehensive approach.
  • An idealized safety monitoring system needs to support generic data structure, prospective monitoring, dynamic feedback.
  • The Massachusetts data registry represents a high-quality prospective registry because it’s mandatory and it’s real-world.
  • We are proceeding with our ongoing evaluation of automated safety surveillance using our pilot system.

Thanks very much.

DR. SHUREN:  Questions?

MS. CRONIN:  I was wondering to what extent you are using the Partners electronic health record system to feed into what you are doing.

DR. RESNIC:  I work at Partners -- Brigham and Women’s Hospital, Mass. General.  There is a large consortium of institutions in eastern Massachusetts that are part of the Partners network.

We are actually using upstream sources.  The granularity of data required, at least for this exercise, is finer than the resolution of the Partners data repository.  So we are using the catheterization laboratory systems and the departmental laboratory systems.  We integrate with the Partners electronic record system for longitudinal, long-term follow-up at those centers that are part of the network, and for objective data elements like laboratory studies.

But again, this is a difficult nut to crack.  This is a local integration effort at a few centers in one state.  There are unique design features of each laboratory within even that network.  It is a challenge.

It is an excellent electronic medical repository, and particularly, probably, well-suited for pharmacologic review because of the computerized order entry and electronic prescription systems that are part of that system.  For what we have been doing with medical devices, it has been limited to the integration for objective data, follow-up laboratory results, admissions and discharges, that sort of thing.

MS. CRONIN:  I just know that they are unique in that in the next year they will have roughly 4,500 physicians connected, and you will be able to bring in ambulatory care.  So in terms of data triggers and making use of your early-warning system, in terms of feeding it back to clinical care, there might be some opportunities there.

DR. RESNIC:  Absolutely.

MS. CRONIN:  I was also wondering, are you working with MA-SHARE?  Do you have plans to integrate into what they are doing?

DR. RESNIC:  Not at this time.  This is an NIH project -- the project itself is bounded by the R01 grant.

DR. SHUREN:  One of the things you mentioned is that when a signal is kicked out -- if it is getting to the point where you may issue an alert -- a safety analyst is going to look at it.  That is resource-intensive.  Do you actually cost it out?

DR. RESNIC:  That is actually one of the specific objectives of the study:  to cost it out and see what the cost-effectiveness of the presumed early-warning system will be, comparing the cost of the investigations that would be triggered -- given the sensitivity and specificity of the system -- against the health-care costs incurred if one did not have a system.  That is, if you missed the retroperitoneal-hemorrhage risk that we identified -- technique-based, in this case -- and it was flying below your radar screen during the period of the study, how much would that cost the system?  The costs are very high.  Each retroperitoneal hemorrhage costs $30,000.  The savings from preventing three, say, certainly pays for two data analysts.

DR. SHUREN:  Other questions?

[No response]

Thank you very much.

We are going to break in exactly 15 seconds for lunch.  But for those folks who are registered to speak in the afternoon, could you please just come up to the front.  Eric Mettler, who is really the person who is running today’s public meeting, wants to talk to you about logistics.

We are going to break for lunch.  We are going to get back together again at 1:15.

(Thereupon, at 12:04 p.m., the meeting was adjourned, to reconvene at 1:15 p.m., the same day.)

AFTERNOON SESSION

REAR ADMIRAL MCGINNIS:  The first speaker this afternoon is Alan Menius from GlaxoSmithKline.

 Agenda Item:  Presentation by Alan Menius

MR. MENIUS:  Thank you.  My name is Alan Menius.  I am the director of discovery analytics at GlaxoSmithKline.  I would like to thank the organizers for giving me this opportunity to describe how we are trying to use observational data a lot more in our pharmacovigilance efforts at GlaxoSmithKline.

Postmarketing pharmacovigilance is a big issue for us, as you might imagine, and we take it very seriously.  We use a lot of different data types; there is no doubt about it.  The greatest new developments recently are these large databases.  Of course, we have tools right now for spontaneous adverse events, specifically the proportional reporting ratio and the multi-item gamma Poisson shrinker.  These are fairly new techniques that try to take advantage of spontaneous reporting data.

The problem is that those data have no denominator.  These are analytical methods that were derived to try to take care of some of the issues that these data have.
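[Editor’s illustrative sketch, not part of the presentation:  the proportional reporting ratio compares how often an event appears among a drug’s spontaneous reports versus all other reports -- a disproportionality measure that works without a true denominator.  Counts below are hypothetical; MGPS adds empirical-Bayes shrinkage on top of this kind of table.]

    def prr(a, b, c, d):
        """Proportional reporting ratio from a 2x2 table of spontaneous reports.
        a: reports with the drug and the event      b: the drug, other events
        c: other drugs with the event               d: other drugs, other events
        """
        return (a / (a + b)) / (c / (c + d))

    # Hypothetical counts: the event makes up ten times the share of this
    # drug's reports that it does elsewhere, so PRR is about 10.
    print(prr(a=30, b=970, c=300, d=99_700))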

How can we do better?  One way is to use a different data source.  One of those sources is observational data.  We use a lot of observational data at GlaxoSmithKline, for a myriad of reasons; one of them now is pharmacovigilance.  One reason is that you have a denominator.  You can actually figure out, with a rate, what the prevalence of a disease is, what the comorbidity rates are, et cetera.  You can start looking at temporal estimates.  You can look at time series.
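[Editor’s illustrative sketch, not part of the presentation:  with observational data the denominator is known, so crude rates and rate ratios can be computed directly.  All numbers are hypothetical.]

    def rate_per_10k(events, patients):
        return 10_000 * events / patients

    # Hypothetical cohorts: patients exposed to the drug of interest vs. a comparator.
    exposed = rate_per_10k(events=42, patients=55_000)     # ~7.6 per 10,000
    comparator = rate_per_10k(events=18, patients=60_000)  # ~3.0 per 10,000
    print(exposed / comparator)                            # crude rate ratio, ~2.5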

But the real question is -- okay, fine, but how do you shape those data so that you can do an analysis?  I have heard a lot of talk today about analytics.  I have seen some good efforts, actually -- some use of Bayes and that sort of thing.  How do you get the dataset to the point where you can actually do this sort of analysis?

Let’s look at this slide.  This is basically an individual going over time -- imagine time along an axis.  This person receives some scripts for drug A and drug B; there are two prescriptions here.  What we are also interested in is, when did they not get that drug?  I call that a persistence window.  If you can figure out what that window is, then you can construct what we call drug eras.  A drug era is really just the span of time that the person was actually on a certain drug.  This is important, because it tells you when the person was actually on the drug.

Likewise, if a couple of conditions come up in the medical record, we can have persistence windows there, too.  A patient goes to their doctor, and the doctor records diagnoses, using an ICD-9 code or a MedDRA term -- “you have this, you have this, you have this.”  You want to know, are they still diagnosed with those things over time?  If they are, then we have a condition era.

Now, if we want to understand drug-event combinations, which is how we get ratios for adverse events, we then just marry them up.

What is nice about this:  if you look to the far right, you can see one condition, CE3, which occurs when there isn’t a drug on board.  That gives you the underlying background rate of the condition.
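[Editor’s illustrative sketch, not part of the presentation:  a minimal construction of the drug eras described above.  The 30-day persistence window is a hypothetical tolerance, not GlaxoSmithKline’s actual parameter; condition eras can be built the same way, and overlapping the two yields the drug-event combinations.]

    from datetime import date, timedelta

    def build_eras(prescriptions, persistence_days=30):
        """Collapse (start_date, days_supply) prescriptions into continuous drug eras.
        Refills separated by less than `persistence_days` are treated as one era."""
        spans = sorted((start, start + timedelta(days=days)) for start, days in prescriptions)
        eras = [spans[0]]
        for start, end in spans[1:]:
            last_start, last_end = eras[-1]
            if start <= last_end + timedelta(days=persistence_days):
                eras[-1] = (last_start, max(last_end, end))  # still persistent: extend era
            else:
                eras.append((start, end))                    # gap too long: new era
        return eras

    rx = [(date(2006, 1, 1), 30), (date(2006, 2, 5), 30), (date(2006, 6, 1), 30)]
    print(build_eras(rx))  # first two scripts merge into one era; the third starts anew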

So we see this as a way forward, another data source and another way of really understanding what is happening in the real world.

Our future vision is that we will continue to use and develop methods built on observational data sources.  It’s not a question of whether we will; we are doing it, and we are developing methods.  We are using ontologies now to link disparate data sources.  We do not depend on a single observational data source.  Because of biases in datasets, we think it’s better to look at several data sources at the same time, asking the same question, but structuring the data so that you are actually asking the same question of each source.

Finally, we do utilize other types of data in our development process.  Janet, I think, spoke very early this morning about using pharmacogenetics, metabolomics, other types of biomarkers, using classifiers and prognostics, to try to give us an understanding of what we are going to expect once we actually get the drug to market.

Finally, just to show you that we have started doing this, here is an example using an observational data source, where we computed, for a certain antibiotic, whether it is actually increasing hypoglycemia.  This is just comparing, over time, the prevalence of hypoglycemia per 10,000 patients versus other types of antibiotics.

With that, I think I am done.  Thanks.

REAR ADMIRAL MCGINNIS:  Thank you, Alan.  Questions for Alan?

Let me ask you one question, Alan.  Noncompliance came to mind when I saw those gaps in your chart.  Is this tool useful for tracking compliance?

MR. MENIUS:  We haven’t tried to use it for compliance yet.  You are right that it’s a symptom.  It kind of makes you wonder.  But is it good at actually identifying compliance issues?  Probably not.

Any other questions?

[No response]

Our next speaker is Alexander Ruggieri from Cerner Corporation, Cerner Galt Division.

 Agenda Item:  Presentation by Alexander Ruggieri

DR. RUGGIERI:  Thank you very much.  I’m Alex Ruggieri, and I’m happy to represent Cerner here.

These discussions, hopefully, will lead to a stronger centerpiece for safety surveillance for medical products.  Cerner has been in the industry of laying the information tracks for about two decades, and we are anxious to be partners in assisting you in whichever directions these discussions go.

Cerner Galt is a division of Cerner that deals with the discovery, management, and conveyance of risk associated with drugs, biologics, devices, medical products, in a way that is scientifically sound, credible, and compliant.  We do this on behalf of our clients.

What I want to do for you today is give our perceptions, mostly driven by the announcement for this meeting, of what the gaps are, what the obstacles are, and what the issues are, which basically represent forks in the road, which will hopefully stimulate more discussion.

I think most, if not all, of the gaps -- at least a significant part of the problem with the gaps -- relate to data quality.  That is where this all begins.  The data has to be relevant.  There has to be sufficient quantity to support statistical power.  The data has to be consistent; that is where we get down to data definitions, standards, vocabularies, ontologies, and those infrastructures still represent significant gaps.  The data has to be comprehensive.

A lot of the talks we heard today were highly focused presentations -- very nicely designed systems, but generally focused in one particular therapeutic area, one particular product.

First of all, it’s important to note that the Sentinel Network shares the obstacles of EHR deployment.  All of the obstacles facing the deployment of electronic health records in this country will impact these sentinel networks.  We believe that privacy and workflow issues should not, in the technical sense, be obstacles to the Sentinel Network.  They may be cultural obstacles, but they should not be technical ones.

I think it is incorrect to look for any dependence on health-care providers for drug safety data.  We don’t believe this is realistic.  The physician is too busy.  That is why the adverse-event reporting system generally only captures a very tiny percentage of adverse events.

I think an important question that needs to be asked, which I don’t think has been clarified yet, is, what do we mean by drug safety?  Does that mean recognizing a looming disaster?  Is risk management going to be part of drug safety, when we understand what risks are with drugs or therapeutic products?  Are we going to be learning more about risk/benefit ratio?  This raises the possibility of using data as virtual registries, virtual clinical trials.

I have heard very little about regulatory barriers.  I am sure that is probably another conference.  But all of these discussions -- the term “it will take an act of Congress” is quite literal here.  So I see a number of barriers that need to be addressed from the regulatory standpoint.

Finally, what will be the impact on communication and conveyance?  How will outputs of these types of systems affect labeling?

What do you want the system to do?  Is it just going to be mingling of data?  Is it going to involve all the warehousing capabilities of cleaning, mapping, aggregating, algorithms?  Then what?  Will industry have a chance for rebuttal?

What about comparative product safety?  That eventually will be a river you are going to have to cross.

What about communication to providers?  When do you have an obligation to communicate findings from some of these systems to providers?

There is a lot of discussion about linking.  It is important to decide what it is you want to link.  Do you want to aggregate all data together, put it in one big pot?  Are we talking about all data or data subsets?  What about many of the research databases, some of which we heard about this morning, which exist in a lot of academic institutions, some of them not known by data managers within the institutions?

The other thing that I think needs to be addressed is the AERS culture, which has been the centerpiece, at least from the industry perspective, of drug safety.  There has, unfortunately, developed this AERS mentality -- a this-is-the-best-we-have mentality -- riddled with regulatory semantics under which you can have a serious skin rash but a non-serious breast cancer related to a therapeutic product.

MedDRA is not an acceptable terminology.  AERS, or AERS II, however that is conveyed, should not overshadow or compete with the vision of the Sentinel Network.

We believe there is low-hanging fruit.  Many systems exist right now and are capable of doing this.  We think there are varsity players available right now, and the varsity players should not be hindered from proceeding in order to let junior varsity players catch up.

Biosurveillance was mentioned.  There is metadata available; a biosurveillance metadata system has been mentioned.  I don’t think a biosurveillance system goes all the way, however, but it is definitely a starting point.  We would recommend thinking high-level first.

Finally, it must identify a mutually beneficial partnership role with all the stakeholders.  We don’t believe that industry should be left out of this discussion.  This will imply that there will need to be a very strong brokering and governance layer, and agreement of all to abide.  People that have these data sources, which are very rich, could potentially be tempted to pursue possible proprietary ventures with them.

This is my last slide:  a very high-level domain data model for what we call a safety-relevant event.  It is fed from procedures, conditions, condition milestones, substance administrations, outcomes, and services.
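[Editor’s illustrative sketch, not part of the presentation:  the high-level domain model on the slide might be rendered as a record type like the following.  Field names are illustrative guesses, not Cerner’s actual schema.]

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SafetyRelevantEvent:
        """One event record assembled from the feeder domains named on the slide."""
        patient_id: str                                        # de-identified
        procedures: List[str] = field(default_factory=list)
        conditions: List[str] = field(default_factory=list)
        condition_milestones: List[str] = field(default_factory=list)
        substance_administrations: List[str] = field(default_factory=list)
        outcomes: List[str] = field(default_factory=list)
        services: List[str] = field(default_factory=list)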

Thank you very much.  I would be happy to answer any questions.

REAR ADMIRAL MCGINNIS:  Thank you, Alex.  Questions from our panel?

So your recommendation, Alex, was to really keep the private-sector databases up and running as the varsity players and then everybody that is coming behind -- maybe keep that separate, use one as a benchmark, maybe, for the other?

DR. RUGGIERI:  I think the systems that are functional now could be used for proof of concept, could be used for guidance for systems that are up and coming.  I also think, at the end, when there are a lot of players -- I think there is a threat of regression to the mean when you mingle too much data from too many data sources, where certain pockets of data may be more epidemiologically sound than other pockets.

REAR ADMIRAL MCGINNIS:  Or we may have even the quirks that we heard about this morning.  You are following these diabetic patients; all of a sudden there is no drug there.  That’s because the drug isn’t a pharmacy benefit.  The device is in another part of the coverage, so it’s not showing up here.  So you would be aggregating data with different quirks in it.  Unless somebody understands those databases, you could come out with a wrong outcome.

DR. RUGGIERI:  That is certainly true.

DR. BRAUN:  Please elaborate further on the problems with MedDRA.

DR. RUGGIERI:  First of all, I have to say that I come from a terminology shop, by training.  So I had to think very strongly and very rigorously about what the function of terminologies and dictionaries is in computational systems.

The issues with MedDRA -- I think if you take Jim Cimino’s laundry list of must-haves for a clinical terminology and compare it against MedDRA, MedDRA falls very short.  MedDRA is not comprehensive.  MedDRA does not support qualifiers.  MedDRA is based on an anatomic ontology, which I don’t think supports the medical expression you are going to find in electronic health records or clinical data warehouses.

I don’t think MedDRA has the infrastructure or the methodology in place to behave like a serious player as a machinable terminology -- issues of maintenance, issues of concept retirement.  There is also an enormous lack of qualifiers.

Those are just some of the items.  Generally, it has been inadequate.  As a safety officer in a large biotech company, as well as in my current role, I daily come across issues with MedDRA.

REAR ADMIRAL MCGINNIS:  Any other questions from the panel?

[No response]

Thank you, Alex.

Our next speaker is Dwight Reynolds from the Heart Rhythm Society.

 Agenda Item:  Presentation by Dwight Reynolds

DR. REYNOLDS:  Good afternoon.  Thank you for providing the Heart Rhythm Society with the opportunity to present comments regarding the establishment of a national Sentinel Network for postmarket surveillance for medical devices.  My name is Dwight Reynolds.  I am president of the Heart Rhythm Society.  I am also the chief of cardiology at the University of Oklahoma.

The Heart Rhythm Society is the international leader in science education and advocacy for cardiac arrhythmia professionals and patients, and the primary information resource on heart-rhythm disorders.  As a practicing heart-rhythm specialist for over 25 years, I have diagnosed and treated thousands of patients with heart-rhythm disorders, many of them life-threatening.  I have done this with pacemakers and implantable defibrillators, and cardiac resynchronization devices of late.

I know firsthand how important it is for the United States to establish a comprehensive Sentinel Network of easily accessible and reliable postmarket medical-device surveillance information which can be used by the FDA, as well as by practicing physicians caring for patients, to enhance patient safety and care.

For background, less than a couple of years ago, there was a crisis in the patient and physician community regarding the ability of the current postmarket surveillance system to react to information on implanted cardiac devices and to communicate the essential data to physicians and patients in a timely manner.  In 2005, recalls and advisories issued by the three largest pacemaker and defibrillator manufacturers and the untimely death of a patient with a device malfunction led the Heart Rhythm Society to focus attention on the postmarket system and the critical need for reform.

In September 2005, the Heart Rhythm Society convened a policy conference, cosponsored with the FDA, of 300 experts in industry, the law, the physician community, risk communication, and patients to explore improvements for postmarket surveillance of pacemakers and ICDs.  As a result of that meeting, the society assembled a task force of leading cardiac care providers and experts charged with the development of recommendations to address concerns raised at the conference.

In September of this past year, 2006, the Heart Rhythm Society published these recommendations to improve the postmarket surveillance system for implanted devices.

The reason we are here today is to share these recommendations with a wider audience and to ultimately improve the nation’s postmarket surveillance system for implanted cardiac devices.  The recommendations are the first such collaboration among these diverse groups and have been officially endorsed by the American College of Cardiology and the American Heart Association.

Due to the limited time we have here today, I am going to focus on three major recommendations:

  • The first is the need to improve the current postmarket surveillance network, with an emphasis on highlighting the current limitations in the MAUDE database.
  • Number two is the important role of remote monitoring of implanted cardiac devices and its potential for inclusion in the Sentinel Network.
  • Third, the potential of the current ICD registry, a partnership of the American College of Cardiology and the Heart Rhythm Society, which could facilitate the collection of postmarket surveillance data, enrich the Sentinel Network with the highest-quality data, and improve postmarket surveillance for implanted cardiac devices.

The MAUDE database problems are, in significant measure:

  • Event reports are cryptic and incomplete.
  • It is often difficult to determine if a true device malfunction or patient injury has occurred.
  • Poor organization and poor retrieval tools within MAUDE frustrate the user’s ability to find useful information.

The Heart Rhythm Society recommends that the FDA begin efforts to design and implement a more robust reporting system for observed device malfunctions that could overcome many of these MAUDE shortcomings and strengthen the voluntary reporting system.

Although we support the creation of a Sentinel Network, risk-adjusted analysis of any such system is going to be absolutely necessary.  These are complex devices placed in patients with diverse disease severities, often in life-threatening situations.  Since severity of illness overlaps with the complexity of device function, data from the Sentinel Network will need to be interpreted cautiously.

Just a statement about remote monitoring, for now and in the future.  Medical device companies are increasingly developing these technologies that allow for monitoring of patients with pacemakers and ICDs outside of the physician office or in other health-care settings.  Through the use of proprietary monitors and transmitters, patients, even today, are able to send device and cardiac function data from their home or someplace else to a remote station using radiofrequency and transtelephonic technology.  Remote monitoring systems have the capacity to be used for data collection and risk identification and analysis in the development of the Sentinel Network for postmarket safety.

In this regard, the Heart Rhythm Society recommends that cardiac rhythm-management device manufacturers continue to develop, and that FDA consider utilizing, remote monitoring technologies when establishing the Sentinel Network to identify abnormal device behavior as early as possible and to automatically and accurately determine the performance status of certain implanted device functions, thereby decreasing the reliance on passive reporting.

I am only going to make a comment about the ICD registry in passing.  The ICD registry is a joint effort between the American College of Cardiology and the Heart Rhythm Society.  It has the potential to be an important tool in postmarket surveillance.  It certainly has to be expanded, and there is much attention being given to that particular issue right now.  But it certainly does have the potential for helping in this regard.

I will stop there.  I will be happy to answer any questions that you might have.

REAR ADMIRAL MCGINNIS:  Thank you, Dwight.  Questions from the panel?

DR. GROSS:  I am just curious about your mention of remote monitoring and just your thoughts about how the society and its members could participate in that effort.

DR. REYNOLDS:  The Heart Rhythm Society in particular, in its review of the device-performance problems of a couple of years ago, determined -- with, I think, agreement by all parties, including the FDA -- that the problem of numerators and denominators in looking at device-performance problems cannot be overcome with things such as the MAUDE database.  We think that this remote monitoring capability will flesh out both the numerator for device problems and the real denominator, so that we will have the ability to really address the issues of risk comparatively.

The society’s role in this will simply be to help modulate it as it evolves and to try to bring some sense to the multiple manufacturers that are involved in it.

REAR ADMIRAL MCGINNIS:  Any other questions?

[No response]

Thank you very much.

Our next speaker is Stephen Goldman from Stephen Goldman Consulting Services.

 Agenda Item:  Presentation by Stephen Goldman

DR. GOLDMAN:  I’m Dr. Steve Goldman.  I am an independent international consultant in medical products safety.  I was the original medical director of the MedWatch program.

I am going to actually talk about something that has been given rather short shrift so far today, and that is data quality.  Nothing is going to make a difference with the technologies if you haven’t got good-quality data.  No one has really talked about that, so I am going to talk about that.

We know that there are problems with spontaneous reports.  But there is also a problem with acceptance by the clinical community of sentinel systems and other things.  With all due respect to Dr. Ruggieri, I am stunned to think anyone would advocate that the clinician be cut out and not be important to the process of sentinel systems.  I frankly think the opposite.

First of all, Abraham Lincoln said in 1862, “You cannot escape history.”  We should not try to escape history.  History teaches us from one decade to the next that programs that educate health-care practitioners to recognize, report, and provide quality adverse-event reports work, period.  The problem we have run into is not that they don’t work; it is that they don’t get funded adequately.  They stop.  I do mention that the latest program was 2006.  These programs work.

Secondly, the effect of MedWatch:  In the initial few years of MedWatch, Toni Piazza-Hepp and D. Kennedy analyzed the reports that came in through the voluntary program.  There was clearly an increase both in the proportion of serious direct adverse-event reports coming into the agency and in the quality of reports from pharmacists, which approached that of physicians.

Secondly, we ran a whole series of conferences and CME articles for docs and pharmacists and others.  They were extremely well received.  They asked for more of them.  The conference we ran -- there is clear interest in the clinical community, when it is presented in a certain way.

How do you do that?  You present it in a clinical setting.  For example, an ADR committee was formed that tied into the work of the pharmacy and therapeutics committee.  There was regular feedback.

I don’t see Jeff Shuren here this afternoon.  He asked about feedback.  Feedback is critical.  If people are going to take the time to fill out a report, they want feedback.  With MedWatch, you got a letter when you sent something in.  We got constant calls about “what happened to my report.”

Secondly, there is the exquisite work done in academic detailing by Frank May’s group in Australia.  Study after study shows that good programs in academic detailing will change prescribing behavior -- will change health-care behavior -- in a positive way:  again, a sustained educational program delivered at the point of care.

I will make the point that none of the enhancements matter -- data mining, whatever terminology you use, any of the computerized systems, as good as they are -- if the quality of the data is poor.  You have to start with the quality of the data.  You have to start with the clinicians making the assessments.  You have to start with the people entering the data and then utilizing it.

Secondly, the concept of directed questioning is how you get better-quality reports.  We have been talking about an off-service-note format.  Those of us who have treated patients know when you leave a service, you write a complete summary of what has gone on with that patient.  It’s extremely helpful.

Secondly, there are the known bad actors when it comes to adverse events -- this is what the “Always Expedited Reports” list is all about.  These are notoriously drug-related.  We know that these are the ones that cause significant morbidity and mortality.  They are also idiosyncratic events.

What should we do about them?  There are questionnaires that have been developed for some of them, but there is not one organization that owns them.  There is not one group a drug company can go to and say, “I’m looking for torsade.  What questions do I ask?”

David Graham is in the audience.  David and I served on the phenformin task force at the FDA.  We put together a series of questions to be asked when people called into MedWatch and also into the company.  Lo and behold, the quality of the reports got better.

Clearly, this is a need, and no one has ever taken charge of doing it.  I’m the North American chair of the DIA’s clinical safety and pharmacovigilance special interest area community, and we started a task force on that.  But, frankly, this is a national and international initiative that should be undertaken.

Why?  Because these are the kinds of studies you can do.  This is Kaufman’s group that took a look at the first 31 putative cases of aplastic anemia with felbamate.  They got bone marrow biopsies, other clinical data, and they were able to confirm what was seen with the adverse-event reports.

When you ask about point of care, here is an example of what was done in Belgium.  The docs literally got an email about following the IDSA guidelines on catheter infections -- a very simple intervention.  Noncompliance went from 44 percent to 15 percent.  You asked about noncompliance, Tom; there is a perfect example.

The second one was in France, where treatment guidelines were made easily usable for the oncology group.  Take a look at the improvement in decision making when a trusted tool was available to people at the point of care and point of contact.

What are the lessons learned?  If we are going to improve the recognition, reporting, and analysis of adverse events, it needs to be done at the point of care, and I truly believe it needs to be done by other health-care professionals, because health-care professionals tend to trust other health-care professionals.

Secondly, if you link them to other important activities, such as quality assurance, accreditation by the Joint Commission, P&T committees, you get buy-in from the clinicians and other folks who are working at the health-care settings, particularly hospitals.

Thirdly, the data clearly indicates that if you make the collection of postmarketing safety data clinically relevant, health-care professionals are very receptive.

Similarly, if you provide clinically relevant feedback as a result of that data collection and analysis, they are also extremely receptive.  Why wouldn’t they be?  It affects patient care.

And if you have clinically relevant treatment advice, including safety advice, and you provide it in a user-friendly format at the point of care, health-care professionals are very receptive.

In summary, there is a lot of good data to suggest that the clinical community is receptive to this type of information.  They must be part of the equation, because they are the ones seeing the patients, they are the ones entering the data, and we have to improve their knowledge of how this data is being utilized.  It has to be an ongoing, sustained process.  Frankly, there has to be money involved, because these programs cost money.

Thank you.  I will take any questions.

DR. SHUREN:  Thank you, Steve.

My apologies to the past speakers and Dr. Goldman for my having to step out and miss their presentations.

With that, let me ask the panelists if folks have any questions.

[No response]

You answered everything.

DR. GOLDMAN:  I presume he wants to ask a question.  Does he get a chance to ask a question?

DR. RUGGIERI:  I think you may have misunderstood me.  I apologize.  I was speaking very quickly.  In no way do I think health-care practitioners should be left out of the, quote/unquote, equation.  Health-care practitioners are involved and have a lot of their day absorbed in capturing the essential data that we need to discover information about risk associated with products.

What health-care providers should not be burdened with is to step out of their exam rooms and away from the patients and have to worry about filling out a burdensome form or a burdensome set of information to relay that information.  Plus, one health-care provider’s hypothesis about a relationship of a particular event with a drug is exactly that; it’s a hypothesis.

With the system we are discussing here today -- I am not saying that there should not be a mechanism for physicians to call the attention of authorities to their concerns about a particular product.  There should be an AERS system; I am not asking that it be replaced.  But I do think we need something broader.  We need to keep our ear to the door and watch not, quote/unquote, adverse events, but all events.  I think the knowledge about risk is going to come out of watching patterns, not individual events.

DR. GOLDMAN:  May I say something?  Thank you for the clarification.  Let me make two points.

Given that the meat of the problem is going to be infrequent events that are lost in the noise, we might need clinical trials.  That’s one thing.

Secondly, the bad actors, the “Always Expedited Reports,” as you know, many of them are spectrum disorders.  Therefore, what you said about MedDRA only works to a point.  It’s only a terminology.

Take the dermatitides -- toxic epidermal necrolysis, erythema multiforme, Stevens-Johnson syndrome.  Someone may report one as a diagnosis versus another.  You have to look at the cases.  You have to have better cases being reported.

I still would advocate for having standardized query tools, recognized internationally, because these things are seen internationally, to engender the kind of data you are talking about for the larger databases.

So thank you for the clarification.  I presume you would agree with what I am saying on that -- or maybe not.

DR. SHUREN:  Thank you.

Next, Michael Lieberman from GE Healthcare, Practice Solutions.

 Agenda Item:  Presentation by Michael Lieberman

DR. LIEBERMAN:  I want to talk today about the Medical Quality Improvement Consortium, a group of GE Healthcare users, as an example of aggregating clinical data across the country.

This slide just shows that we do get information from across the country.  This data is limited just to our EMR.  That has both good and bad aspects to it.

This consortium has actually been around for about five years now.  It really was brought about, in large part, by and for our users, who saw the value in the data that they were collecting and they wanted to get it out of their individual silos and share it among the consortium, really to do a couple of things:  To help with improving clinical quality at their own level, at the larger level, and also to make the data available for research in general.  As we heard earlier from Dr. Hill, we also make our data available in commercial ventures to help pay for this whole system.  So it is not only a free service for our users, it is better-than-free service for them in that we share some of that revenue with them.

I won’t spend too much time on this.  It is the same sort of thing:  we have an extract system at each site that pulls data out of their EMR and sends it into our database.  We can then clean it up and produce quality reporting and data for research.

The key here is really getting the users comfortable with giving up some of the control of their data.  We feel that they really still own the data.  However, they are obviously giving up some control of it, in that we are aggregating it at our site.  We really had to go very far to assure them that the risk to them was low.  We had to be HIPAA-compliant, and really even more than HIPAA-compliant.  They are business partners with us, but we made the decision to try to have as little protected health information at our site as possible.  That made them feel better, as well.

So we remove all of the usual patient-identifying information -- we strip out name, Social Security number, phone, address; you can see the rest of what we do there.  At our site, it is all de-identified and stripped of protected health information.  The members are able to re-identify the patients at their own sites.  If they are running a quality report on our site and want to dive in and see which patients are not meeting their A1C goals, they can look up those patients on their own systems.
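[Editor’s illustrative sketch, not part of the presentation:  one common pattern for this strip-centrally, re-identify-locally arrangement is a salted pseudonym whose crosswalk never leaves the member site.  The field names and hashing scheme are assumptions, not the consortium’s actual mechanism.]

    import hashlib

    PHI_FIELDS = {"name", "ssn", "phone", "address"}  # illustrative subset of identifiers

    def deidentify(record, site_salt, crosswalk):
        """Strip direct identifiers; replace the chart number with a site-held pseudonym.
        `site_salt` and `crosswalk` stay at the member site, so only the site
        can map the pseudonym back to a real patient."""
        pseudonym = hashlib.sha256((site_salt + record["patient_id"]).encode()).hexdigest()[:16]
        crosswalk[pseudonym] = record["patient_id"]
        clean = {k: v for k, v in record.items() if k not in PHI_FIELDS and k != "patient_id"}
        clean["pseudonym"] = pseudonym
        return clean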

The information we have covers the usual types of clinical-list items:

  • Problem lists with ICD-9 codes.  We have a more granular, somewhat proprietary vocabulary, but we have found that most people are comfortable with ICD-9s and are using them.  We would like to get to SNOMED at some point as well.
  • Medication data.  This is all mapped to a commercial drug database vendor’s coding system, so that is coded information.
  • Flow sheets.  We have the usual vital signs, lab results, and even some social-history items, like smoking status.  So there is a richness there.
  • As I mentioned, the demographics that we do have.

The thing that we don’t have right now is document information; we don’t bring documents into our database.  Again, that was due to concerns about protected health information.

Just some quick statistics.  We have over 100 members.  A member can be anything from a one-doc office to a large academic medical center.  We have over 6 million patients -- I think it’s getting closer to 7 million now -- and 6,000 physicians, mostly primary care, though we do have specialty information as well.

Again, we have had this in place for about five years.  I think somebody else’s slide showed that the average length of patient history in the database is at least 22 months.  With a clinical dataset, that is actually a little harder to define.  Due to time constraints, I won’t go into that.

Just some of the considerations about this:

  • There is always the question of representativeness of the data.  We do have certain regions of the country where we are more highly penetrated and where people seem to be more interested in participation in this consortium.  They hear about it from their friends.  So it is not necessarily representative geographically across the country.  But in terms of insurance types, we do very well.  We have people without insurance.  We have Medicaid, Medicare, and private insurance.  It is really just what you would see throughout various doctors’ practice panels in terms of insurance types.
  • The ability to combine with other datasets.  Again, because of our interest in de-identifying the data, we are not currently able to combine this with other types of data, such as claims.  When you talk about risk analysis, the other part of it is benefit analysis.  To measure benefit, you probably want a more complete dataset, to try to demonstrate some decrease in the overall cost of care for the patient.  That is something we are definitely interested in doing, but there are some technical issues at the moment.
  • The last point is the ability to gather additional information.  We talked about a Sentinel Network.  If you see a signal, you then have to go back and investigate it.  So it is key to be able to go back and look retrospectively, more in depth, at the data.

We do have the links in place to do that.  We really haven’t done that yet.  We haven’t put this into place.  But we do have the ability to go back to physicians and have the physicians go back to the patient, if necessary.

Prospective collection is the other issue.  If you are releasing a new product to market and are concerned about a particular aspect of it that is not captured in routine clinical care, you might want the ability to ask additional questions that would make the dataset more complete.  We do have the ability to do that as well, although it is more difficult.

Any questions?

DR. SHUREN:  Questions?

[No response]

Thank you.

Next, Adrian Thomas from Johnson & Johnson.

 Agenda Item:  Presentation by Adrian Thomas

DR. THOMAS:  Thank you very much.  I am very pleased to be here to make a different sort of presentation on behalf of industry.

I am the chief safety officer for J&J, with responsibility today for the pharmaceutical and biopharmaceutical sectors, much of the consumer sector, and some devices.

As we look at the concept of sentinel sites and drug regulation from the point of view of a broad developer and deliverer of health care, we are thinking beyond just that scope, to the complexity involved in supporting the full range of our products.

We all know about the inherent tensions in delivering novel therapies.  There is always going to be that tension between getting something out early to patients who need it and minimizing risks, particularly unknown risks or unexpected risks.

The other tension is that the data that we use to generate approvals -- that 25 percent response rate in someone with pancreatic cancer.  Clearly, 75 percent didn’t respond.  When we go and treat a patient in public practice or in private practice, we want to understand how those data relate to the person we are seeing in the real world:  What is the medical relevance of a trial population to the population we are treating?  What is the relevance of the data we have generated, which might be placebo-controlled, to pricing and payer decisions?  What is the relevance of it to unknown risks?

Clearly, this is all leading to what we are talking about today, which is that we do need additional quality longitudinal data from real-world populations to help answer some of these broad issues.

It is clear that these data are not just for industry or regulators or payers or patients or physician groups; they are for all those stakeholders together.  At the end of the day, we need a transparent approach to how these data are generated, to make sure they are received, acknowledged, and utilized in a way that is actually beneficial in the end -- which takes me to some pragmatic considerations that I want to focus on today.

As we talk about a consortium approach in the Sentinel Network, I guess the question I would ask, which I challenge people to think about, is, what is an appropriate role for industry -- not just our industry, but the technology industries that develop EMRs, the industries that develop data-mining methodologies?  What is the appropriate role?  In today’s environment, I think it would be fair to say that there are questions about the role of these industries with respect to delivery of health-care products, patients, their relationships with physicians and regulators that need to be answered.  So I think there is an ethical standard here that we have to answer.

The second one is around privacy and ownership.  As a person who works for a broad-based health-care company, I believe firmly that it is our company’s responsibility to know more and to try to understand more about the safety of our products than anyone else, because we sit, in fact, on the Global Development Program.  We sit on information coming from inside the U.S., outside the U.S., preclinical, clinical, postmarketing.  It’s our responsibility to understand those and to communicate those as well -- [Tape change, 4a to 4b] -- closer and closer to the patient and the individual record, that needs to be quite clear, because it’s not the industry, obviously.

I want to touch on quality versus quantity.  I fully believe that the physician needs to be fully engaged in, and to understand their responsibility for, pharmacovigilance and outcomes data.  We don’t need more information.  Since I joined this company about six years ago, we have gone from 60,000 safety reports to 200,000 safety reports.  If you model that out over five years, we will get up to 400,000 safety reports.  That doesn’t include 800,000 device failures and so on.

We don’t need more data points.  We need more information of high quality.  We have to have the physician, as the primary provider, whether it’s at the call center level or filling in an EMR or responding to a registry survey, engaged in giving high-quality information.  Otherwise, we are simply seeing a lot of data points which will probably be empty.

How are we going to use this information?  The brave new world of data mining is exciting; I think it is the way we have to go.  But I ask:  How will lawyers use it?  How will the public perceive it?  How do we know what strength to assign to this sort of data, which, frankly, is not being collected primarily as safety information?  We are making assumptions about claims data.  We are making assumptions about clinical data in electronic medical records that may bear on safety outcomes.  What standards are we going to put around that?  How should people interpret it?

Finally, I fully support the open-source approach.  I think at the end of the day, in my safety database, I have to achieve this, which is, from within J&J, over 140 companies and organizations feeding me data, let alone the outside agencies and partners that we deal with.  We have to have an open-source approach to safety data, so that there are some common exportable elements that everyone understands.  Building proprietary systems means we are locked into looking in certain isolated spots for safety data.  The rarer an event becomes, the more broadly we have to look.  If we need to look at special populations, they may not exist in one sentinel site.  We have to be able to cast our net more widely.  So I make a plea for interoperability and common standards.

Thank you.

DR. SHUREN:  Let me ask, one of the things you raised is that we don’t need more data points, but we need high-quality data.  Any recommendations as to how we incentivize folks to provide high-quality data?

DR. THOMAS:  I do have a few ideas.  You can tell I am not North American by origin.  Look at the impact in the U.K., for example, of directly linking physicians’ payment -- a pay-for-performance type of measure -- to quality outcome indicators, including drug safety, where as much as 20 to 30 percent of a physician’s income can be at stake.  They spent that money and more, and the quality of data flowing into the MHRA, the U.K. statutory authority, really improved.

I think we have to understand that if we ask a physician to move away from their clinical setting and send a piece of paper off on their fax, we are not going to get people engaged.  But if we can actually incentivize them in a way that doesn’t disrupt their clinical flow, that leads to a value-add -- for example, if they are in an EMR, why isn’t the form in the EMR?  Why should they walk away and fax off a MedWatch to FDA, who then have to process it by hand and employ someone to look at it?

We have to make these things simple.  The carrots are there.  We use pay-for-performance measures for lots of other reasons.  I simply challenge people to think, why not for this?

DR. SHUREN:  One other question you put out there is the role for industry.  There is certainly sensitivity about how close industry may be to those activities, particularly when the data is put out by government or used for regulatory purposes.

I know you can’t speak on behalf of the drug industry, and you may not feel comfortable speaking on behalf of J&J.  So I will leave it open to you.  But is there interest from the standpoint of either J&J or the industry in providing funds to support the activities we have been talking about today, but to stand back away from the data, the use of it?

DR. THOMAS:  I do feel comfortable speaking on behalf of J&J, but not of the industry.

I think my answer is, yes, if it makes sense to the science or the product.  What we would like to see, though, is that, in some way, we can extract that de-identified data and add it to our total experience, so that, as we look at the overall development profile of our product, particularly if we are trying to mix what is here in the U.S. with what is in Asia, what is in Europe, we can build a safety profile, or even an outcomes profile, around our product.

Being able to pull data is very important -- while clearly steering away from privacy concerns and proprietary intellectual property.

DR. SHUREN:  Thank you.

Next I would like to call John Rothman, Opt-e-scrip.

 Agenda Item:  Presentation by John Rothman

DR. ROTHMAN:  Thank you.

I am here to present to you today a collaboration between Opt-e-scrip and the American Academy of Family Physicians.  The first half of this presentation was supposed to be given by Wilson Pace, who is not here today.  Actually, two other people were supposed to give this presentation; they both called me at the end of last week, and I have both of their slide sets.  So I will do the best I can here.

This is a collaboration based around a novel way of prescribing chronic-care drugs, developed by Opt-e-scrip and currently being implemented by the American Academy of Family Physicians.  The AAFP runs a very large and successful practice-based research network, with physicians in virtually every state.  They have a core staff of 15 people.  They have numerous peripheral networks accessory to their central network.  They are actively involved every day in practice-based research.  They have numerous protocols going on.  Opt-e-scrip is working with them currently on the development of this chronic-care methodology.

I have more slides than time, so I am going to skip through quickly.

The mission of the American Academy, as it says here, is to conduct, support, and promote research.  They see a contribution that they can make in the arena of postmarketing surveillance.

They are very, very electronically literate.  They have led the way in a number of initiatives: large numbers of electronic health record systems within the academy, a lot of standardization.  They are members of ePCRN, which comes out of the NIH Roadmap for improving clinical practice.  If you are not familiar with this, it is an NIH-funded initiative to network medical networks.  AAFP has taken a lead in this.

As it says here, they can identify patients, track patients, guide protocols, et cetera.

The system exists for a number of purposes within practice-based family medicine.  In looking at the system, the way it is constructed, the way it can potentially be used, it has application in postmarketing surveillance, in Phase 4 trials.  That is what we are moving toward now.  That is why I am here to present this to you today.

Which takes me to the tool that they are using, which is the Opt-e-scrip method.  The Opt-e-scrip method is an N-of-1 trial, in which patients get two drugs in a prescription, in a blinded, crossover fashion.  They take both of these drugs.  While they know what is in the kit, they don’t know what they are taking on any given day.

Based upon the elimination half-life, a portion of the data is not analyzed.  It’s a washout period, as patients switch between drugs.  In this within-patient, crossover, blinded design, side effects and efficacy measures are collected.  It’s very abbreviated.  Roughly a half-dozen side-effect measures are rated on a five-point scale, roughly the same number of efficacy measures, in the same way.  It takes 15 seconds to fill out a day’s worth of data.  It is very abbreviated, but it is very effective.

When aggregated, these N-of-1 trials are extremely powerful.  Maybe I will have time to get into that later.

These tests are all validated.  They are single-patient, double-blind, randomized, repeated crossovers.

The power of this trial -- this is statistical heresy.  A lot of people haven’t been doing this, even though such trials have been going on in medicine for 30 or 40 years.  Instead of basing power calculations on the number of patients, they are based on crossover intervals.  You can get very powerful, very quickly, within a patient.  Aggregating the trials gives you the ability to do Bayesian hierarchical evaluations.
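
To make the within-patient arithmetic concrete, here is a minimal sketch in Python of how one patient’s crossover intervals might be summarized and then pooled across patients; the ratings, the data values, and the simple averaging are illustrative assumptions, not the Opt-e-scrip algorithm.

```python
import statistics

# Hypothetical daily ratings for one patient, already averaged per
# crossover interval and with washout days excluded.  Each tuple is
# (mean efficacy rating on drug A, mean rating on drug B).
patient_intervals = [(4.2, 3.1), (3.9, 3.4), (4.4, 2.9), (4.0, 3.3)]

def within_patient_effect(intervals):
    """Mean A-minus-B difference across a patient's crossover intervals."""
    diffs = [a - b for a, b in intervals]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Power accrues from repeated crossovers within one patient...
effect, spread = within_patient_effect(patient_intervals)
print(f"Patient-level effect: {effect:.2f} (sd {spread:.2f})")

# ...and pooling many patient-level effects approximates the population
# curve -- a crude stand-in for the Bayesian hierarchical step.
cohort_effects = [effect, 0.9, 1.1, 0.4, 1.3, 0.7]  # other patients assumed
print(f"Cohort mean effect: {statistics.mean(cohort_effects):.2f}")
```

A full hierarchical analysis would weight each patient’s effect by its precision rather than averaging them equally, but the direction of the logic is the same: power within the patient first, then aggregation.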

Currently, this method is being looked at by AAFP as the standard of care.  We are assessing it right now in a 600-patient trial with AAFP.  Other trials have been written and other trials are going to be fielded soon.

There is a lot of literature on this if you care to investigate it.  The American Medical Association came out in favor of designs like this.

These are the kits.  The drugs are blister-packed.  You can see that they are blinded.

This is the regimen in which patients are given different drugs on different days, in multiple-crossover design.  Again, that is the kit.

This is the data-collection form.  Two sides of that one form are a week’s worth of data.  So it is very abbreviated, very quick to collect, but it does hit the major points on safety and efficacy, and it does give you a profile of the patient and their ability to both respond to and tolerate a given drug -- for example, an H2-blocker versus a PPI.

This is the cover page of the report.  It tells you some of the statistics that are used.  There are some of the statistical comparisons between the two drugs within a given patient.

As I said, this method is being used currently by AAFP.

That is my presentation.

DR. SHUREN:  Questions?

DR. TRONTELL:  Just looking at your slides, it would suggest you are looking largely at outcomes that are symptomatic, where the patients can report.  Is that a fair assumption?

DR. ROTHMAN:  Yes, that is a fair assumption.  This is intentionally designed to be very real-world.  It is taken in the context of which drug is optimal for any given patient in the universe of potentially optimal drugs, and to narrow that choice down based upon evidence.  This is an evidence-based way of prescribing specific medications.

DR. TRONTELL:  How would you see this being applied to other instances where, perhaps, you may not have such ready measures of effectiveness, where it might involve physician monitoring, laboratory monitoring, et cetera?

DR. ROTHMAN:  There are a variety of applications, and we are beginning to explore a number of them now.  One of them, for example, is in the area of ADHD, where one of the characteristics of the disease may, in fact, be the response to the drug overall.  It can be used almost diagnostically in that setting.

So we are looking at a variety of things.

I think the key difference that I would like to share with you is that, unlike the classic way in which parametric statistics would use large numbers of patients to draw a curve and then infer that all patients fall below the curve, not knowing the position of any given patient, in this particular model we know where every patient is, and with a large enough number of them, we can infer the curve.

DR. SHUREN:  Thank you.

Next, Jonathan Seltzer, Applied Clinical Intelligence and American Pharmacists Association Collaboration.

 Agenda Item:  Presentation by Jonathan Seltzer and Ben Bluml

DR. SELTZER:  Thank you very much.

This is a collaboration between Applied Clinical Intelligence and the American Pharmacists Association.  Both of us actually did show up and bring our slides.

I want to thank the FDA for having us here.  Along with Ben Bluml, who is standing in back of me, from the American Pharmacists Association, we are here to share with you our work on the creation of a pharmacy-based postmarketing safety surveillance network.  I will give you a high-level view of the network.  I will tell you about the players.  Ben will give you a virtual tour of the pharmacy setting.  We will wrap up with a word about the technology that links it together.

Opinion polls reveal that pharmacists are the most trusted of health-care professionals.  Although I am a cardiologist, I recognize that there are many health-care professionals who are not physicians.  Additionally, pharmacists are the most frequent point of contact with the health-care system for most Americans.  We have taken advantage of this fact in creating an infrastructure for a pharmacy-based clinical-trials network dedicated to the rapid provision and trusted communication of postmarketing safety information.

The Pharmacy Research Network is systems-based.  It embraces the principles of continuous improvement and transparency.  It is also devoted to collecting the highest-quality standards-based data and is electronically connected.  It is easily understandable to the patient, something the patient can touch and see.

Basically, as I said before, it is systems-based.  There are really two feedback loops here:  One, tracking the patient, along with the adverse events, compliance, drug-interaction information; another loop that tracks communication with the patients, hopefully with the collaborators and the physician community.

The postmarketing safety network combines the natural advantages of pharmacies with technology, as well as the American Pharmacists Association’s reach, which allows for rapid deployment, flexibility, and cost efficiencies.

This is a partnership between the American Pharmacists Association, which represents nearly 60,000 pharmacists and whose foundation has for years been engaged in research, education, and teaching, and Applied Clinical Intelligence, which I represent, a private company.  We specialize in leveraging technology and data to improve patient safety.

I am now going to turn it over to Ben Bluml from the APhA.

MR. BLUML:  Thanks, Jonathan.

The first thing I would like to do is to be a little bit bold and make an assertion.  I believe that pharmacists actually are the most dramatically underutilized resource in our health-care delivery system today.  I think there is a huge opportunity for these learned health-care professionals to actually engage in processes that can contribute in a meaningful way to safety.

We have engaged over 200 qualified research sites, several hundred pharmacists, and the various dots across the map here, in various Phase 3 and Phase 4 clinical trials over the last few years, with a variety of partners.  We have also engaged a number of practices in different types of advanced care delivery models.

Let me take you on a little bit of a virtual tour here.  This is a picture of a pharmacy in Richmond, Virginia that has participated in a number of these activities.  It looks fairly traditional.  Maybe the drop-off and pick-up counter is a little larger than you are used to.  But the bottom line here is to remember also the context of pharmacy.  There are over 250 million discrete visits to pharmacies every week in this country.  That is almost the entire population of the country passing through a pharmacy.  The idea that you have a highly accessible location here is oftentimes overlooked.  Pharmacy could have a unique contribution to the safety agenda.

While the pharmacy counter looks traditional, if you step back into their counseling room, they have been engaged in programs like Project ImPACT: Hyperlipidemia and the patient self-management program for diabetes.  Another program -- one they weren’t involved in, but a number of other pharmacies were -- is in Asheville, North Carolina, called the Asheville Project.  These programs have all demonstrated collaborative practice, where patients, pharmacists, and physicians are able to work together to reduce the incidence of adverse events, improve clinical outcomes, and ultimately reduce the total cost of care to the system over time.

When you look at the criteria for engaging professionals -- to take a step back and look at it from a principle-centered approach -- pharmacists are primed.  That is, they have access to patients.  They have resources, both physical and human, in their practices to help them implement these programs.  They have information-management skills.  They have motivation.  Every pharmacist now graduates with a doctorate-level degree.  And they have demonstrated results: multiple sponsors have told us that the clinical data collected in these Phase 3 and Phase 4 clinical trials are top-drawer, and they would love to see this type of effort expanded.

Jonathan?

DR. SELTZER:  In addition to the human touch that pharmacists provide on a frequent basis, there is a lot of technology powering the system.  In order to track patient safety and drug-use events, we are combining electronic barcode tracking with ICR/OCR technologies and linking the pharmacy network to track communications.  We use interactive, collaborative technologies.  Finally, if there is a purpose for third-party real-time adjudication of safety events, based on clinical data standards, there is interactive, Web-based adjudication that can drill down to the patient level.

In conclusion, a pharmacy-based postmarketing safety surveillance network is flexible, rapidly deployable, and a highly transparent system designed to ensure proper medication use by providing and communicating the highest-quality numerator/denominator information regarding compliance, adverse events, and drug interactions.  We believe this system may, in fact, inspire public confidence in drug safety surveillance and provide a win/win for the public.  Dare I say also that this is a nice opportunity for industry, as well as for many of the parties involved?

We also think it is a very nice system for hypothesis generation, as well as confirmation and collaboration with some of the other people we have heard here today.

Thank you very much.  We will take questions.

DR. SHUREN:  Questions?

In terms of actually getting the information on a possible adverse event, how does that actually work?  Are pharmacists actually contacting patients on a periodic basis?  Is it the time when they come in for a refill?

DR. SELTZER:  There are probably a number of systems.  We envision this as part of an entire program.  There may be advertisements.  Industry may participate.  There would have to be some qualification to get into the system.  But there would be advertised presence of a drug safety surveillance program, in conjunction with people when they receive a prescription.

What we envision is that the patient would come in with a prescription to a pharmacy with the program, and the pharmacist would say, “Would you like to engage in this program?”  They would then get a signed HIPAA authorization and confidentiality agreement, and the patient would proceed at that point into a large-scale safety trial, with very limited information collected.  You pick up basic demographics and concomitant medications.  Then your follow-up is essentially for additional concomitant medications and SAEs, with an authorized call-out in case of noncompliance.

DR. SHUREN:  So the pharmacist would then make a periodic phone call if they have not heard?

MR. BLUML:  Yes, typically there would be follow-up on lack of adherence, which is one of the big challenges experienced in these programs.  We have found that implementing structure and process models for care delivery, in conjunction with order fulfillment or prescription pick-up, is a very effective way to increase adherence rates.  We have seen rates go from industry standards of 30 to 40 percent at 12 months to 95, 96 percent.

DR. SHUREN:  If the patient then reports what sounds like something adverse happening as a result of taking this particular medication, is there any additional follow-up that occurs at that point and records kept of that?  Or is what is reported by the patient filtered by the pharmacist and then entered into a record?

DR. SELTZER:  It’s obviously filtered by the pharmacist.  At this point we would like to discuss collaboration with groups that could -- we have done collaborations where physicians are part of it.  But in terms of a formal program, we would probably look to other members of the group to make that last piece.  We believe the pharmacists can do and will be authorized to do an out-call to a certain level.  How we involve the physicians in it -- we have had differing opinions.

DR. BRAUN:  In the safety area, can you cite a specific example of your biggest success story?

MR. BLUML:  I think in the area of safety, the clinical-trial work that has been done by the pharmacists in the Phase 3 and Phase 4 studies has resulted in numerous adverse events being identified in the pharmacy and fed back to both the manufacturer and the AERS program.  That has really been a concrete example.

DR. SHUREN:  Thank you.

Next, Donald Hackett, Community MTM Services.

 Agenda Item:  Presentation by Donald Hackett

MR. HACKETT:  Good afternoon.

If this presentation sounds similar to the previous one, it’s because we are doing the same thing, from a different perspective.

Community MTM Services, Inc., is a company that is focused on pharmacist-delivered patient-care services.  We are the second technology initiative of the National Community Pharmacists Association.  The first initiative, launched a few years ago, SureScripts, focused on the e-prescribing challenges and is now linking many physician-based systems with the leading pharmacy organizations, again focused on moving the e-prescribing transaction to a digital format.  There will be a lot of enhancements forthcoming.

Community MTM Services launched approximately a year ago.  It is a new company, again focused on taking a lot of disparate systems and needs, and producing technology that can be scaled and leveraged throughout the country.  That really means moving data between EHRs and PHRs, but from a pharmacy clinical system, which appears to be a new term.  The pharmacy on Main Street, USA, really doesn’t have a clinical system to use.  That is what we are developing.

Again, the goal here is really to link pharmacists, providers, and patients in a low-cost, scalable model that leverages technology, but delivers a very good user experience to all of the participants here, not trying to be cumbersome in the documentation or reporting process.

The conversation previously was about the Asheville Project.  If you really aren’t up on that project of a few years ago, it proved the point that if you integrate pharmacy into a local initiative, great results do come out of it.  That is an opportunity for a small project to be scaled throughout the country.  It takes some technology solutions, which we are working on, but it really gets to the whole point of a private-public opportunity.

Some of the lessons learned last year:  CMTM Services primarily focused on delivering CMS-mandated medication therapy management services.  We delivered about 40,000 cases at retail, which means a pharmacist received information coming from the client, the payer, which had identified individuals based on their medication consumption and flagged some interaction opportunities or some economic opportunities to switch to generics.  In this model, pharmacists who are members of our network contacted these individuals to come into the pharmacy at a convenient time.  They spend approximately 30 minutes with each individual, going through the current meds they are on and the opportunities to make appropriate changes, delivering patient education where appropriate, and then documenting the entire process.

During last year, we learned a couple of things.  Number one, it has to be easy -- easy for the pharmacist and for the consumer as well.  Appropriate compensation for the pharmacist to spend those 30 minutes is required.  They have to be motivated to put in the time, schedule the individual, bring them into the storefront, and have the facility to conduct that intervention.  Those are some of the items required there.

It has to be convenient for patients.  It is an appointment.  It is an appointment with a health-care provider in a white coat.  It has to be convenient and easy for that patient to participate.

From a workflow perspective, it is about data.  Last year we pushed the identified individuals to the pharmacists, who contacted those individuals to have them come in.  Additionally, our network of pharmacists was saying, “Let us also identify individuals based on what we see and what we hear.”  As you bring in that information, it is reviewed, the patient is contacted, the intervention is documented, and then billed.  So this is a very seamless system that enables a pharmacist to be paid for delivering services in a scalable, organized manner.

Some of the action items, just this morning:  Again, we are a new company deploying multiple solutions.  Obviously, integrating the reporting requirements into our system will move up into the schedule for the summer.  The data-submission process -- instead of going off to a Web site, maybe we will integrate it all, so it is more effective from the pharmacist’s perspective.  Then we will obviously focus on educating pharmacists on what this is all about, how to track events, what adverse events are, and do that through some CME opportunities.

With that, I will again thank you for inviting us here today.

DR. SHUREN:  Questions?

One thing to ask in terms of putting in reporting -- obviously, not reporting requirements for the pharmacists, but what FDA would otherwise expect to see in a report.  Will that be integrated just as part of the MTM service that is provided?  Do you see this as an opportunity, if a pharmacist sees what they believe is an adverse event, for fostering the collection of information on a potential adverse event and then sharing it with the government?

MR. HACKETT:  I think that is a great question:  How do we combine the MTM requirements coming out of CMS to meet some of the FDA goals about tracking?  I think there is obviously synergy there.  But again, how easy do we make it for the pharmacist?  Are they compensated for their time?  Are they educated?  Do they understand what they are doing?  There are a couple of items we would have to address.

DR. SHUREN:  Thank you.

Since we are running ahead of schedule, if folks are available, I am going to press on.  Let me ask if Hugo Stephenson from Quintiles is here.

 Agenda Item:  Presentation by Hugo Stephenson

DR. STEPHENSON:  Thank you very much, everybody, for the chance to have a chat today.

I am not sure how many of you have heard of Quintiles as a company.  Quintiles has market-leading experience in running postmarketing clinical trials.  We run postmarketing active surveillance studies for a wide range of sponsors, across a range of countries.  In fact, sitting behind me are a lot of the people that run a lot of the commitment studies that come out of the regulatory process.

In fact, many people view us as an expert independent source in the industry.  We have decades of experience working with the industry to solve difficult challenges.

One of the things which I enjoy the most about my job is that we see some of the best practices and the worst practices when it comes to operationalizing elements such as active surveillance, commitment studies, and so on.  While we all see the value and the importance of physician-reported data, one of the things that we have continuously found, as some of the other speakers have said, is that a system that relies upon physicians to step out of their workflow to report data is inherently flawed.  Yet currently the industry standard is to create more and more doctor-driven sentinel networks.  By “doctor-driven,” I don’t mean doctor-reported data, but networks where the doctors have to step out of their workflow to do this.  We ask doctors to do more.  We are asking doctors to run more clinical trials, run more observational studies, or take better care of their medical-record maintenance.

That works, maybe, within a few practices in Massachusetts.  It may work within a particular payer group.  But does it actually work on a national scale?  Why do these projects get up to a certain level and not continue beyond that?  Doctor-driven sentinel networks are expensive, as Dr. Goldman said.  We have to spend a lot of money to get doctors to drive this.  They are highly inefficient, and it is difficult, particularly here in the United States, with the multi-payer system, to incentivize doctors to actually do this.

Let’s look at some of the numbers.  I do this every day.  Of 100 doctors that I write to, contact by phone or fax, or go out and physically see to ask to participate in a sentinel network or a study or active surveillance, four -- four out of 100 doctors -- actually even respond to participate.  Of those four, one will recruit a patient.  One doctor will recruit a patient.  Of those, between 50 and 60 percent will still be following up that patient after 12 months.  This is across protocols.  This is not one company, one design.

So the net effect of all this is that, through all these commitment studies that I am actually running, I am capturing a yield of safety information of about 0.5 to 0.6 percent.
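
That arithmetic can be checked directly; a minimal sketch, with the stage rates taken from the figures just quoted:

```python
# Recruitment funnel as quoted: per 100 doctors contacted, 4 respond,
# 1 recruits a patient, and 50-60 percent of those patients are still
# being followed at 12 months.
doctors_contacted = 100
recruiting_doctors = 1
retention_low, retention_high = 0.50, 0.60

yield_low = recruiting_doctors / doctors_contacted * retention_low
yield_high = recruiting_doctors / doctors_contacted * retention_high
print(f"Net yield of followed patients: {yield_low:.1%} to {yield_high:.1%}")
```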

This is not a surprise.  When we look at physician participation in passive surveillance, we are looking at 5 percent or less participation and support.

I put it to a group that is looking at a solution for active surveillance and sentinel networks that those numbers are just not good enough.

I would like to propose a new model and a new system for sentinel networks.  Just to the people in the audience, how many people here actually take a prescription medicine?  Put your hand up.  Leave your hands up if you would like to receive safety alerts and updates about the medicines that you take.  Most of us keep our hands up.  Most of us are interested in receiving this information.

I would like to introduce you to a new approach to a Sentinel Network today.  We call it iGUARD.  It is the first and only patient-driven national drug safety program that enables consumers to take an active role in managing their health.  People who buy a new car take their car in for a service regularly.  They get their little key rings  and so forth.  But at the end of the day, what we don’t see is that behind that is a massive quality-assurance data-collection system that helps manufacturers understand their cars better.

iGUARD is just like the customer-care program that we have for our cars.  Patients volunteer a little bit of information about themselves to participate, in order to get these alerts and updates.  They tell us what drugs they are taking, the conditions they have, and a few tick-box questions.  We follow up with those patients on an interval basis, three months, six months, and six months thereafter.  If anything turns up that suggests that an SAE could occur, we contact the doctor and we flesh out a MedWatch form.  This feeds back into the process.  I want to give you guys the 95 percent of forms and reports that you are actually missing -- still doctor-controlled data.
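
As a rough illustration of the follow-up cadence just described -- three months, six months, and every six months thereafter, with escalation to the doctor when a response suggests a possible SAE -- here is a minimal sketch in Python.  The function names, the date arithmetic, and the SAE flag are hypothetical, not the iGUARD implementation.

```python
from datetime import date, timedelta

def follow_up_schedule(enrolled, horizon_years=3):
    """Contact dates at roughly 3 months, 6 months, and every 6 months
    thereafter, out to a chosen horizon."""
    dates = [enrolled + timedelta(days=91), enrolled + timedelta(days=182)]
    while dates[-1] < enrolled + timedelta(days=365 * horizon_years):
        dates.append(dates[-1] + timedelta(days=182))
    return dates

def triage_response(response):
    """Escalate to the treating doctor when a follow-up answer suggests
    a possible serious adverse event; otherwise keep sending updates."""
    if response.get("possible_sae"):
        return "contact doctor; complete and submit a MedWatch form"
    return "continue routine safety alerts and updates"

for d in follow_up_schedule(date(2007, 3, 7))[:4]:
    print(d)
print(triage_response({"possible_sae": True}))
```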

So we are fleshing that out.  At the same point in time, we are able to provide a denominator for the clinical research, because I have the consumer-based data in the background.  In return, as we learn more and more from inside and outside the iGUARD network, patients and their doctors get timely and relevant information about their drugs.

The thing which I want to tell you about consumers -- we have done a lot of consumer research.  They don’t want to be drug experts.  They don’t want to think about drug safety day in and day out.  They are not interested in the meetings that we are having here.  All they want to do is live their lives.  They want a signal.  They want an alarm, like a smoke alarm.  They want to know that they are going to be told.  They want to know that they don’t read about this stuff on the front page of The New York Times, and they can just get on with stuff.  We have seen an amazing amount of consumer interest in a program like this.

I am going to round up.  It’s an exciting new development in health care.  I think we are putting the patient first, for a change, and mobilizing patient interest in driving safety going forward.  It’s the first patient-driven approach to drug safety.  We have been taking paper registrations in this program for the last two weeks, and we will go live next month with a Web-based registration process.  I have signed up.  My family has signed up.  My parents have signed up.  I suggest to you guys, if any of you guys care about this stuff, go and sign up as well.

The consumer is driving decision making in so many other areas.  They have energy, they have drive, and they have desire.  They can fuel a Sentinel Network that transcends payers, transcends states, and transcends a hell of a lot of other stuff.

I hope in a year’s time we will be able to stand up with a whole lot more information out here.

Thanks very much.

DR. SHUREN:  Questions?

DR. BUDNITZ:  Thank you for your participatory presentation.  How is this whole system paid for currently?

DR. STEPHENSON:  The system actually has multiple partners.  It doesn’t involve industry funding at the moment.  It is supported by private capital.  But I think the thing which we all see is that there is tremendous value in being able to build something like this at a consumer level.  That is essentially where we are looking for the time being.

DR. SHUREN:  You mentioned that you would provide, certainly, patients with safety information as it comes out.  What is the source of the information that you are providing?

DR. STEPHENSON:  I can go into the technicalities.  One of the things that is very different behind this process, though, compared to others that I have heard here -- the data at the backend is open-source.  This is not a data sales model.  Basically, any qualified researcher that wants to access this data and look at it has to agree to the fact that patients are volunteering their data for analysis on the basis that they don’t read about this on the front page of The New York Times.  Anybody who wants to access the data can do it.  Obviously, there is a governance process for looking at it and reviewing it.  But at the same point in time, that data has to be fed back through the patients that are part of the network as a priority first.

So there is feedback from iGUARD-generated data, but there is also feedback from other systems.  FDA label changes and so forth feed through the process, and other sources of content feed through as well, as part of the research network.

DR. SHUREN:  Thank you.

Let me ask if Margaret Binzer from McKenna Long & Aldridge is here.

[No response]

We will come back to her.

Duane Steffey from Exponent?

 Agenda Item:  Presentation by Marta Villarraga and Duane Steffey

DR. VILLARRAGA:  Thank you.  Good afternoon.  Obviously, I am not Duane Steffey.  Duane Steffey is behind me.

Thank you for the opportunity for Exponent to speak at this public meeting.  My name is Marta Villarraga.  I am a principal engineer at Exponent in medical devices.  With me today are Duane Steffey, director of statistical and data sciences, and Jordana Schmier, managing scientist and health scientist.

I will begin the presentation and Dr. Steffey will conclude it.

Exponent is a 40-year-old engineering and scientific consulting firm with a national and international presence.  We actually began as Failure Analysis Associates.

We solve complex technical problems by forming multidisciplinary teams of scientists, physicians, engineers, and statisticians to perform the needed research and analysis.  Our full-time consulting staff represents more than 70 technical disciplines, and we draw from experiences that we all bring from industry, academia, or government.

We bring a perspective here today based on our experience in failure analysis of medical products, in the development, maintenance, and use of large electronic population-based databases for risk analysis, from the risk identification and risk analysis methodologies that we have been using over the years, and from risk communication.

The feasibility and future success of the Sentinel Network will require addressing the developmental needs of the three key components -- surveillance, assessment, and communication -- all of which are important elements of risk management.  While commenting on each of these aspects, we will make reference to the specific questions posed by FDA for this meeting.

Effective surveillance will require a well-engineered system to collect data for decision making, while minimizing respondent burden and preserving patient confidentiality.  Current medical product safety data-collection systems do not fully consider and track hazards and factors such as those described in ISO 14971, which could harmonize the data collected.  There is a lack of complete information on the performance of successful products, on events where there are product complaints but no product is returned for evaluation, and on total exposure, to enable us to make a meaningful assessment of risk.

Furthermore, there is little incentive for voluntary reporting, due to lack of immediate feedback.  Adaptive, automated, advanced control systems should be considered as models to fill the data gaps, given their demonstrated ability to handle complex information and manufacturing processes.  Usability evaluations during development will help these systems collect better data more easily.  For example, the California newborn screening program, cancer registries, international orthopedic registries, and even the registry presented by Kaiser this morning should be reviewed in this light.  These are mature sentinel systems that provide feedback to clinicians while progressing toward longer-term goals.

MR. STEFFEY:  The deficiencies of current data-collection systems severely limit their utility for risk assessment.  Multiple reports for the same event cannot be linked automatically, potentially affecting the accuracy of incident counts.  Lack of information on exposure precludes the comparison of adverse-event occurrence rates within and between classes of medical products.  Limited information on hazards causes difficulties in accounting for the effects of confounding factors.

Because most risk analyses are conducted using observational studies rather than controlled experiments, care must be taken in making comparative risk judgments.  Some commonly used risk-identification and analysis methodologies may prove inadequate to the assessment tasks associated with a real-time surveillance network.

Alternatives, such as propensity score analysis, offer the potential to adjust for measured hazards in assessing risks.  Already identified for use in medical device clinical trials and mentioned in previous discussions today, Bayesian statistical inference is well suited to risk identification and analysis of continually updated medical product safety data collected during postmarketing surveillance.
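
As a rough illustration of the propensity-score idea, here is a minimal sketch in Python that stratifies patients on an already-estimated score and averages stratum-specific risk differences.  The toy records are assumptions, and a real analysis would first fit a model (for example, logistic regression on the measured hazards) to estimate the scores.

```python
# Records of (propensity score, exposed?, had adverse event?) -- toy
# data with the scores assumed to be estimated already.
records = [
    (0.12, True, False), (0.15, False, False), (0.18, False, True),
    (0.45, True, True),  (0.48, False, False), (0.52, True, False),
    (0.81, True, True),  (0.84, False, True),  (0.88, True, True),
]

def stratified_risk_difference(records, n_strata=3):
    """Average exposed-minus-unexposed event rates within score strata,
    which crudely adjusts for the measured hazards behind the score."""
    total, used = 0.0, 0
    for s in range(n_strata):
        lo, hi = s / n_strata, (s + 1) / n_strata
        stratum = [r for r in records if lo <= r[0] < hi]
        exposed = [r for r in stratum if r[1]]
        unexposed = [r for r in stratum if not r[1]]
        if exposed and unexposed:  # need both groups to compare
            diff = (sum(r[2] for r in exposed) / len(exposed)
                    - sum(r[2] for r in unexposed) / len(unexposed))
            total, used = total + diff, used + 1
    return total / used

print(f"Adjusted risk difference: {stratified_risk_difference(records):+.2f}")
```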

Dynamic models, such as neural networks and classification trees, used successfully in other contexts for the prediction and optimization of complex system outcomes, should be considered for health-care surveillance loops.

Interpretation of surveillance data and model-based findings should involve multidisciplinary teams, with active participation by clinicians and epidemiologists.

Health-care tracking networks, such as those used in newborn screening and bioterrorism tracking, should be studied for their data-collection, analysis, and dissemination capabilities.

Effective communication in a medical products safety data system involves dialogue among all stakeholders.  The deficiencies in existing data-collection systems raise issues regarding when to notify, what to say to whom, and how patients and providers will be affected.  Successful communication requires addressing these deficiencies.

Graphical methods for expressing such concepts as relative risk and the magnitude of rare events can help provide a perspective on risk that facilitates evidence-based decision making.  Involvement of health practitioners at this stage is also essential.

State-of-the-art technology and processing techniques should be employed to provide secure, efficient, and transparent experiences.  Extensive validation and error checking will be needed to produce high-quality data for risk evaluations.  Customized and intuitive graphical user interfaces will maximize the accessibility and usability of data.

Thank you very much for your time.

DR. SHUREN:  Any questions?

[No response]

Thank you very much.

Ed Helton, CDISC.

 Agenda Item:  Presentation by Ed Helton

DR. HELTON:  Good afternoon.  I am delighted to be here.

I know everybody knows about CDISC.  I don’t want to bore you with that.  But I always like this slide.  I call it “the motherboard.”  It shows all of the standards that are under development.  I am chair-elect of the CDISC board and also the co-chair of the HL7 RCRIM committee.  We are working very hard on the semantic interoperability of these standards, because we think that is very much a part of the path.

For example, we are about to start our second pilot with the FDA.  It is going to focus on safety.  We are going to look across therapeutic area and therapeutic class.  We just feel that this semantic interoperability is core to a Sentinel Network.

A very complex slide, but I am just trying to point out again that we are working to make the adverse-event reporting forms -- as we know, there are many -- interoperable, to have semantic interoperability.  Our SDTM-AE domain will map quite well to the ICH E2B ICSR, and we are also working on the basic adverse-event reporting form.  Here again, we realize that to make a Sentinel Network functional, you have to have semantic interoperability among reporting vehicles and formats, so to speak.

My last slide is this -- a very busy slide.  This is our roadmap.  This is all of our data models and how they work.  I brought with me today Landen Bain, who is the ex-CIO at Duke University Medical Center, and before that, the CIO at The Ohio State University Medical Center.

What we are going to focus on here is CDASH and RFD.  Actually, we are going to focus on RFD, but I want to tell you very briefly that CDASH -- Clinical Data Acquisition Standards Harmonization -- is the effort to standardize and harmonize data-collection forms.  It is basically item 45 on the Critical Path opportunities list.

Once again, we have to have standard data-collection forms to make a Sentinel Network work.

Lastly, RFD is “request forms for data collection.”  Landen is our health-care link; he runs that team.  He interfaces with the industry.  We are doing a joint effort with Integrating the Healthcare Enterprise.  We have quite an effort going in that regard.  I give you Landen Bain.

MR. BAIN:  As Ed said, I am working in the area of linking CDISC to the health-care community.  I am going to talk about this Retrieve Form for Data Capture -- Ed was close -- which is a collaborative effort between CDISC, on the clinical-trial side, and IHE, Integrating the Healthcare Enterprise, which is a standards-related group on the health-care side.

Our work specifically addresses question number 2, which deals with the point of care and deals with the issue, which the Quintiles gentleman expressed very clearly, about not asking a physician to step out of the workflow, the patient-care workflow.

I would also connect it to question number 9.  It is a worthwhile, small-scale project that has immediate value to the Sentinel Network work.  I say “small-scale”; I think it has a large impact.  It is a public-domain, standards-based initiative, which I think can help put this Sentinel Network together.

The slide here shows a situation which is common today at health-care and clinical-research sites.  The individuals in the middle -- their primary workflow is with the EHR.  In addition to that, they are often asked to interact with a number of devices and systems which are connected to these external agencies.  What we have attempted to do with RFD is to align that work so that the EHR itself becomes the broker for the data links to the external agencies, be they drug safety or public health organizations and so forth.

 How does this work?  This is just a little snippet from the IHE RFD profile.  What it shows is that this work is done through forms.  The form filler -- in most cases we are talking about the EHR being the form filler -- will reach out and bring back a form in a standard format, an XForm, carrying a CDISC ODM message.  That form is then brought back to the EHR, and the persons in the clinical site can complete that form, with assistance from the EHR, without having to leave their primary relationship with that EHR.
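
A minimal sketch of that round trip, in Python: the endpoint URLs are hypothetical placeholders, and a real implementation would use the IHE RFD web-service transactions rather than bare HTTP GET/POST.

```python
import urllib.request

FORM_MANAGER = "https://forms.example.org/retrieve"   # hypothetical endpoint
FORM_RECEIVER = "https://safety.example.org/submit"   # hypothetical endpoint

def retrieve_form(form_id):
    """Form filler (the EHR) asks the form manager for an XForm
    carrying a CDISC ODM payload."""
    with urllib.request.urlopen(f"{FORM_MANAGER}?formID={form_id}") as resp:
        return resp.read().decode("utf-8")

def submit_form(completed_xml):
    """The completed form goes back to the form receiver (for example,
    a safety organization); the clinician never leaves the EHR."""
    req = urllib.request.Request(
        FORM_RECEIVER,
        data=completed_xml.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```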

Together -- CDISC with IHE -- we put together a demonstration for the HIMSS 2007 conference, which was last week in New Orleans.  The companies that you see there each sponsored a scenario that used the RFD.  Two of those, I think, were particularly relevant to the Sentinel Network.  One was the biosurveillance scenario that was put together by SAIC, along with the Centers for Disease Control and the collaborating companies and agencies that you see there.  Then Pfizer sponsored a drug safety scenario -- my participating colleague from Pfizer will be up next -- along with Allscripts, the EHR system, and several others.

The interesting thing there was that you could actually see a drug safety form pop up inside of the Allscripts Touchworks system.  The data are captured as if it were part of the EHR, even though that form is actually a guest form.  The data then are returned to the safety organization.

That completes my remarks.  I would be delighted to answer any questions.

DR. SHUREN:  Questions?

In order to populate from EHR to a form -- you mentioned these would be CDISC forms -- were you talking about a particular EHR system, with requirements to use certain terminology?

MR. BAIN:  The Retrieve Form for Data Capture profile is an easily adopted profile which we would like to see all EHRs adopt.  In the implementation of that, an EHR can do more or less auto-population of that form.  That actually stands outside of the RFD profile itself.  So the form comes in.  It’s a standard form.  They display it.  If they do nothing but display that form, you save one step of work.  The physician-investigator no longer has to go to another system or to another Web space to fill out that form.  In fact, all of the scenarios which we demonstrated in HIMSS also showed the EHR auto-populating data from that form, as much as possible.  Then the clinician or the study coordinator was able to supplement that with data that weren’t readily available in the EHR.

DR. SHUREN:  What do you need to do for auto-population?  You will have the form.  You have a data field.  You have, essentially, a tag or code for that.  Is it then expected that you would be able to auto-populate only if the EHR had similar fields, similar codes?

MR. BAIN:  This gets into the link to the CDASH project.  In fact, we are working on subsequent specifications with IHE, so that if we have CDASH where there is a known set of data elements that are going to be part of every electronic case-report form, then we publish that to the EHR community and they make certain that those particular data elements are ready to hand and can auto-populate the form.

I would point out, though, that I think there is an irreducible set of data that will always be captured ad hoc, in the moment.  We can auto-populate at least a third, I think, and in some cases more than that, and then hand the study coordinator a short form, which is only the data that are particular to that case.
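
A minimal sketch of that split, in Python, assuming CDISC-style element names that are hypothetical here: auto-populate what the EHR already holds, and hand the coordinator a short form of the rest.

```python
# Elements a hypothetical case-report form asks for, in CDISC-style
# names, and a hypothetical EHR record.
FORM_ELEMENTS = ["BRTHDTC", "SEX", "RACE", "AESTDTC", "AETERM", "AEOUT"]
ehr_record = {"BRTHDTC": "1962-04-19", "SEX": "F", "RACE": "White"}

def prefill(form_elements, ehr):
    """Auto-populate what the EHR already holds; return the short form
    of elements the coordinator must still capture in the moment."""
    filled = {e: ehr[e] for e in form_elements if e in ehr}
    remaining = [e for e in form_elements if e not in ehr]
    return filled, remaining

filled, short_form = prefill(FORM_ELEMENTS, ehr_record)
print(f"Auto-populated {len(filled)} of {len(FORM_ELEMENTS)} elements")
print("Still to capture by hand:", short_form)
```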

DR. HELTON:  One comment.  The CDASH mapping and the domains that are in CDASH follow the SDTM standard, which is part of the study data specifications for the common technical document.  So we are coming back from the FDA submission of efficacy and safety data and working backwards through to the EHR, so that there is compatibility from the very beginning.

DR. SHUREN:  One last question.  What I am hearing is in EHR there is information that can be used in a variety of different ways.  There are a number of different forms, and there are electronic standards being set.  Within the work that we need to have done for a Sentinel Network, do you actually see that there are additional standards that still need to be developed?

MR. BAIN:  I would imagine so.  I can’t give a definitive answer to that.  We did use the ICSR/E2B standard in the work that we did with Pfizer.  Mike Ibara might be speaking to that.

I think the first step is to exercise the existing standards and see how far we can go with that.  What the RFD profile does is to bring these two standards worlds together within a framework, and we can sort of see how far we can get with that.  I think that will help us understand what the next steps are in terms of standards development per se.

DR. HELTON:  I think it’s also important to remember what I showed in what I called the motherboard slide.  We know that a lot of that data is in HL7.  We put a lot of effort into building a semantic interface between the FDA CDISC standard and the HL7 structure.  It may not be perfect, but I think it’s a very large first step.  As Landen was saying, we are certainly not above making changes as required to make this work.

DR. SHUREN:  Thank you.

Let’s go ahead and take a break.

(Brief recess)

DR. SHUREN:  Our first presenter for this afternoon will be Mike Ibara from Pfizer Corporation.

 Agenda Item:  Presentation by Mike Ibara

DR. IBARA:  Thank you very much.  Thanks to FDA for the invitation to speak on this very important and very timely topic.

I am going to talk about something that I think is very simple, very straightforward, and therefore, I hope, very doable.  Building on the talk that you just heard, this is Pfizer’s participation in the CDISC effort to use the Retrieve Form for Data Capture profile in pharmacovigilance.

Just a prefatory remark.  What got us interested in this -- we are of the opinion that at this stage in the game it’s difficult to remove the clinician yet from a Sentinel Network.  When we look at the history of applying information technology to medical diagnosis, I think there is an important lesson there for us.  When we attempted to replace the clinician diagnosis with software, we were ultimately unsuccessful.  But once we began to try to enhance their ability to make a diagnosis, it made a tremendous difference.

If you use that reasoning in adverse-event reporting, we would like to support the clinician in their role in recognizing adverse events, understanding that ideas taken from syndromic surveillance and the like are very important for the backend and can also have application for the front end.  But first we should concentrate on supporting the clinician in that role.

We took some very first steps with this.  This basically is our attempt to put our money where our mouth is.  We understand that there is a possibility to achieve something here, and we wanted to be able to demonstrate it.

So our first goal was to demonstrate the technical feasibility of doing this, of simply providing physicians the ability to report adverse events in a more seamless manner.

The second was to demonstrate being able to use an EHR to do that, and also, then, to stimulate additional work and thought in this area.

Briefly, the scenario that we used was a physician in the office, working with the EHR, with a patient or having just seen a patient, and recognizing that there is an adverse event to report.  He is able to summon up a data-capture form, as you heard previously, from within his EHR, provide information on that event, and then submit that form back to whomever it should go to, all while remaining within the clinical workflow and the EHR.

Just to describe how we actually did this, we put together a team, along with CDISC, of volunteers to work on this.  Sentrx, Allscripts, Relsys -- we enlisted their help to do this.

The standards that we used were XForms, the W3C forms standard; the CDISC Operational Data Model (ODM) standard; and then, as was mentioned earlier, the international standard for safety reporting and exchange of safety information, the ICH E2B standard -- or, as it has been incorporated into HL7, the ICSR standard.

We mapped the ODM to the ICH E2B format.  This was not an onerous task at all.  There were only about five fields, as I recall, that we had any question about at all, on how to map them.  With a simple step, what we basically were able to do was enable transmission immediately from the physician filling out the form directly into an international standard for safety reporting.
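
As a rough illustration of what such a mapping involves -- not the actual mapping used in the pilot -- here is a minimal sketch in Python; the ODM item OIDs and the E2B-style element names on both sides are placeholders, with the real identifiers coming from the ICH E2B specification.

```python
# Placeholder mapping from ODM item OIDs to E2B-style element names.
ODM_TO_E2B = {
    "IT.AE.TERM":      "reaction.term",
    "IT.AE.STARTDATE": "reaction.startdate",
    "IT.AE.SERIOUS":   "reaction.serious",
    "IT.DRUG.NAME":    "drug.medicinalproduct",
    "IT.PAT.SEX":      "patient.sex",
}

def odm_to_e2b(odm_values):
    """Re-key captured form values into E2B element names, flagging
    anything the map does not cover for manual review."""
    mapped, unmapped = {}, []
    for item, value in odm_values.items():
        if item in ODM_TO_E2B:
            mapped[ODM_TO_E2B[item]] = value
        else:
            unmapped.append(item)
    return mapped, unmapped

print(odm_to_e2b({"IT.AE.TERM": "Rash", "IT.AE.SERIOUS": "No", "IT.X": 1}))
```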

One of the things that impressed us about RFD is its flexibility.  You saw this briefly in Landen Bain’s presentation.  We have a form-filler function; a form manager, which can sit anywhere -- in one example, it could be at FDA, managing the actual MedWatch form -- and then the form receiver and the form archive.  Any of those could be the FDA, the manufacturer, or a public-private organization.  So there is great flexibility in this model.

Just as a vision of how RFD could be part of a Sentinel Network, we may have the provider/patient -- and we could work on developing some syndromic surveillance assistance for the physician at the point of care.  At that point, he would use RFD to report the adverse event.  RFD would basically be stripped and converted into an E2B file.  It could be then, for example, sent to a public-private organization.  In our visioning exercise internally, we have been looking at organizations like CRIX, the Clinical Research Information Exchange model.  In any case, whatever public-private organization takes that information could then -- it is already in E2B format -- it could be sent on to regulators or manufacturers, as needed.  That is more, I think, of a policy decision at that point.  It is certainly technically feasible.

The next steps for us are trying to amass some real-world experience with this: looking at the development of a user interface, pre-population of complete clinical data in the form, and studying it in an actual clinical setting to look at the actual burden and the reporting rates -- whether it does what we think it should, which is improve reporting rates and data quality.  We are interested in pursuing this now.  We haven’t started yet, but those are our next steps.

In trying to do this effort, I am always reminded of a quote -- this was 10 years ago -- from Herbert Simon:  “A system design suitable to a world in which the scarce factor is information may be exactly the wrong one for a world in which the scarce factor is attention.”

I think for drug safety, we have lived with a scarce-data model for so long -- throughout my whole career in safety -- that we have to remember that our success with Retrieve Form for Data Capture, and with many things, I think, will depend on getting a deeper understanding, an ontology, of the safety domain itself, and then on a willingness to pair our large-scale technology changes with the organizational changes that we will need in order to make them successful.

Thank you.

DR. SHUREN:  Questions?

One of the things you mentioned is, in moving forward, trying to make this more user-friendly for practitioners.  One of the things with the ICSR is that you arguably could use different-style formats.  You can configure it lots of different ways.  Are you going to look at, for some of the fields, whether there is a more user-friendly way to gather that information, short of simply, “Here’s the field.  Fill it in”?

DR. IBARA:  Initially, we would like to design it, obviously, using what is there.  But already in some of our design discussions, we have found that this opens up the door directly to improving the vocabulary and the semantics of what is collected, and also, possibly, improving what data is collected.

One example we have:  in speaking with various EMR vendors, we have discussed the great value there would be in having a field for the reason a drug was stopped, for example, as part of the normal collection of information.  One of the things we would like to look at, as an offshoot of this, is collecting that information in a form that gives us a chance to improve the quality of what is collected as well.

DR. SHUREN:  Thank you.

Next, let me call for Frederick Rickles, Noblis.

 Agenda Item:  Presentation by Frederick Rickles

DR. RICKLES:  Thank you very much for including us in your proceedings today.  It has been fascinating.  We have learned a lot.

Noblis, formerly Mitretek Systems, is an organization that is committed to work in the public interest.  We are a unique nonprofit organization.  We don’t have shareholders.  We don’t have specific constituents or commercial interests.  We have over 500 professionals who are thought leaders and subject-matter experts.  We are a national resource in science, biomedical research, strategy, and technology.  We have both intramural and extramural research activities, and we provide technology support to 37 different federal agencies and 36 state governments.  We work on a variety of issues in health care and society:  knowledge management and integration functions, which apply to a great many of the issues discussed today; solution prototyping; and implementation of a variety of new technology-enabling capacities.

I will mention RASMAS in a little more detail in a moment, because that is a system that I think bears on many of the issues discussed today.  It is focused on product and safety alerts, but potentially is a platform for doing many more things.

RASMAS has provided a platform for health-care organizations to do their standard workflow operations in a much more efficient way and to integrate risk and safety management into clinical practice.  It has been very effective at doing a number of things for a number of organizations.  It is a Web-based subscription system for safety alerts.  It deals with product recalls -- everything from children’s toys on the pediatric ward to lung-transplant tissue -- and has been very effectively utilized, with about 6,000 users across over 300 health-care facilities, including 200 hospitals.  It has been up and running for about two years, so it has had a remarkable market penetration in a relatively short period of time.  It handles both device and product safety alerts, as indicated, in about 11 different domains.

The primary goal has been to reduce risk-days.  In two years, the average turnaround has gone from 26 days to approximately two days.  It is not a passive system; it is an active system.  It is an example of what Noblis likes to do in the public interest.

The problem we have talked about today -- and I won’t belabor this -- clearly, the IOM report, we think, is a blueprint for the future, in many ways.  Perhaps the Enhancing Drug Safety and Innovation Act will provide some of the wherewithal for FDA to move this along.

The obstacles have been discussed in some detail today, and I won’t belabor them.  But using FDA’s data from 2005, it is clear that a lot of the postmarketing/post-licensing trials simply don’t get accomplished.  To some extent, we think this may be a function of the fact that Phase 4 trials are usually managed as a marketing effort and not as part of a science-driven enterprise.  We are very anxious to see this become a much more science-driven enterprise and are encouraged by what we have heard today across all of the presentations.

Clearly, there are difficulties, which everyone has articulated, in logistics, participation by physicians and patients, a non-universal electronic health record that is neither easily adopted nor adapted to mine essential data, and, of course, many more of these issues.

Possible next steps, we think, involve an adaptable platform.  It is possible, for example, that RASMAS could serve as a post-licensing surveillance mechanism.  It has certainly proved its ability to handle recalls.  Electronic health records and quality-of-care measures have been addressed extensively.  We also do a great deal of research on knowledge-management tools needed to detect both low-incidence toxicities and higher-frequency events superimposed on high backgrounds, as we have discussed today.

So the design and implementation of clinical trials that can work effectively in the community is a high priority for Noblis.  We hope to be able to extend our capacity to work across a number of federal agencies, all of whom are represented here today, to be a facilitator in this very important area.

Thank you.

DR. SHUREN:  Questions?

[No response]

Thank you.

Jon Morris, ProSanos Corporation.

 Agenda Item:  Presentation by Jon Morris

DR. MORRIS:  I wanted to take one second before the clock starts and give you a bit of context on, I think, a couple of things that we have heard over the course of the last few presentations.

There are three different classes of things that people have been talking about (that is maybe an overgeneralization):  What do we do with the existing data that is out there?  What are we going to do with go-forward new systems to be able to drive new data capture, new involvement of providers?  Then there is a set of rules and business processes to be able to leverage and make those first two come to life.

It’s exciting to see Chris Chute still working in this, to see Landen and the group involved with the standards and CDISC.

What I am going to spend a few minutes on is the data mining, looking at existing data out there.  I don’t doubt that we have to go to a system where we get providers, whether it’s the pharmacists or the physicians, engaged in more data capture.  We are also sitting on volumes and a wealth of data that we are not leveraging today to understand what is happening in drug safety and to be able to even begin to activate a Sentinel Network, much less how that is implemented.

This presentation -- you can start the clock -- is actually work that we are doing today with Kaiser Permanente.  KP’s Pharmacy Analytical Services is a long-time partner of ProSanos.  Specifically, given the questions coming out of and the context for this meeting, we are looking at opportunities for public-private collaborations to build data collection and the risk-identification and analysis components.  ProSanos is an analysis company.  We have a series of applications and products for data mining, looking at patterns and relationships specifically within drug-safety data.  Kaiser Permanente you all know fairly well.

KP -- just a bit more data, and you heard about it earlier -- has more than 6 million patients with integrated electronic-health-record data.  They have been an early pioneer in terminology and ontologies, able to integrate and bring multiple different things in.  It is not only labs -- the presence or absence of laboratory studies being done -- but also the results of those labs in an integrated electronic format, plus diagnoses, radiology, and their inpatient and outpatient EMR data.  Independent of what you read in the newspapers about whether it has been implemented and how many billions of dollars it cost, it is a system that is up and running.

ProSanos’ tools are being utilized today in-house within Kaiser Permanente for member safety screening.  It is specifically signal detection and signal evaluation on a subset of about 3 million de-identified Kaiser members.  We have a vendor agreement in place with them.  Basically, KP Pharmacy Analytic Services has program-wide operational responsibility to assure member safety and then to implement what they see in drug safety to try to improve member care.

So what we are looking at with Kaiser is a dynamic surveillance loop.  The clinicians are involved.  The EHR data -- they are running our applications to help detect and then evaluate the signals that they see, interpreting that.  There is a research infrastructure.  As you know, Kaiser has a tremendous amount of effort going on internally in terms of their research.  Then the PAS group is responsible for operationalizing that.  That may be, essentially, something as radical as changing the formulary and whether a product can be presented to a physician.  It may be something as simple as another rule in the decision-support system.  The KP PAS has that.  There is the opportunity, then, to activate what we are talking about in the Sentinel Network.

So it is a dynamic and robust data source.  You have heard everything from Partners, Mayo -- there are a number of organizations, obviously, who are driving and capturing real-time health-record data today.  It is continually updated, which is valuable.  It is also multidimensional.  It is not just physician-providers; it’s nurse practitioners, and there are also the patients themselves -- although not quite at the whole iGUARD level.  That is an exciting thing to do.

If you look at a Sentinel Network, essentially, an activity is, if you make a change, if you implement a decision-support rule, then what happens?  It could be something where there is a black box or a new warning.  It could be something in terms of a care plan that changes within Kaiser.  You need to be able to look over time and see what happens to that patient population.

You heard from the GSK folks.  Alan spoke earlier, looking at events and how events fit within the various drug products.  You need to generalize, when you are looking at health-record data, beyond an adverse event, as identified in MedDRA.  We heard about the issues with MedDRA this morning.  In a drug exposure, something happens; an event occurs.  It could be a diagnosis.  It could be a test that is ordered.  It could be an admission.  All of those things have to fit into your model.  It could be an ER visit.  It could be the test results themselves.

Essentially, in the systematic detection of this, you need to look at disproportionate events.  There is a certain amount of background.  There are things that are associated with diseases.  You have to look at what occurs outside of that context.
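
To make the disproportionality idea concrete, here is a minimal sketch, in Python, of one standard screening measure, the proportional reporting ratio; the counts and the threshold are illustrative assumptions, not ProSanos’s or Kaiser’s actual method.

    # Minimal sketch of disproportionality screening via the proportional
    # reporting ratio (PRR).  Counts and the 2.0 screening threshold are
    # illustrative assumptions, not ProSanos's or Kaiser's actual method.

    def prr(a, b, c, d):
        """PRR from a 2x2 contingency table:
             a = reports with drug X and event Y
             b = reports with drug X, other events
             c = reports with other drugs and event Y
             d = reports with other drugs, other events"""
        rate_drug = a / (a + b)      # event rate among drug X reports
        rate_other = c / (c + d)     # background rate among all other drugs
        return rate_drug / rate_other

    # Hypothetical counts: the event appears far more often with drug X
    # than in the background, so the screen flags a candidate signal.
    score = prr(a=30, b=970, c=120, d=98880)
    print(f"PRR = {score:.1f}")      # prints 24.8; a common screen flags PRR >= 2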

I will show you just one picture here from the KP environment.  Many of you who are involved in drug safety are familiar with our PV map picture: essentially, the progression of signals and serious events over time and the ability to track them, whether in AERS on a quarterly basis or within the Kaiser data.  For a strong signal, you look at the historical exposure and take a longitudinal view -- how often the event occurred and how serious it was prior to exposure, and then what happens 90 to 365 days post-exposure.  There is a time-based continuum that you have to take into account when you look at this kind of data.
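
That pre-versus-post comparison can be sketched directly; the record layout, window boundaries, and data below are hypothetical, intended only to show the shape of the calculation, not the ProSanos/KP implementation.

    # Sketch of a longitudinal pre/post-exposure comparison.  The record
    # layout, windows, and data are hypothetical illustrations of the idea.
    from datetime import date

    def event_rate(events, patients, start_offset, end_offset):
        """Events per patient-year in [start_offset, end_offset) days
        relative to each patient's first exposure date."""
        window_years = (end_offset - start_offset) / 365.25
        hits = sum(1 for pid, day in events
                   if start_offset <= (day - patients[pid]).days < end_offset)
        return hits / (len(patients) * window_years)

    patients = {"p1": date(2006, 1, 10), "p2": date(2006, 3, 5)}  # first exposure
    events = [("p1", date(2005, 8, 1)),    # before exposure
              ("p1", date(2006, 7, 1)),    # 172 days after exposure
              ("p2", date(2006, 9, 20))]   # 199 days after exposure

    pre = event_rate(events, patients, start_offset=-365, end_offset=0)
    post = event_rate(events, patients, start_offset=90, end_offset=365)
    print(f"pre: {pre:.2f}/pt-yr; 90-365 days post: {post:.2f}/pt-yr")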

Interpreting the results is not easy.  One of the challenges, I would say -- if you look at and you integrate the 10 or 12 or 15 different active data-mining or data-collection environments, interpreting the results is going to be a challenge.  You are going to need to be able to ask questions in a standardized way and be able to interpret those results in a standardized way as well.  Obviously, within the KP environment, if you have to go in and ask additional questions of a patient, that is going to require IRB or other kinds of approval, in order to keep things clean.

The big thing within Kaiser Permanente today is that they have operationalized this, so they are taking it back and changing their business practice.  That is critical -- dynamic decision support, the fourth step in the progression from data to information to knowledge to action.  Again, everybody who trained in surgery started with action first, and the rest of the stuff you threw away.  But it is critical.  When you look at drug safety, you have to be able to use the knowledge, drive it back, and then evaluate your performance afterwards.

What we see, in terms of implementing this within the Kaiser Permanente PAS and the analytic tools, is that designated nodes of that network can be activated by an external stimulus -- whether it comes from the agency, from the sponsors, or from patient-advocate groups themselves.  Essentially, any of those can drive it.  Scheduled screening is another route -- there are multiple ways of doing this.

I don’t want this to be underestimated -- these things can be done today.  We have been trying for 10 to 15 years to integrate electronic medical records and clinical-trials data.  We can take the existing technologies, take the data we have today, and advance things a step.

This is an example of a dynamic surveillance loop that we are running today with KP PAS -- the business rules and what happens when you bring things in.  Specifically, we have heard a lot today about risks.  We haven’t talked about benefits.  I know that, certainly for the sponsors -- Pfizer, J&J, and others -- there is obviously a balance.  You have to weigh the benefits and the risks.

We are stressing here picking out signals and what you do with safety.  It still has to come back and be able to be adjudicated and say, how does it sit?  What is the balance of benefit versus risk? 

So the business rules, the operational rules, and then essentially being able to drive some funding to be able to link this together, I think, are all critical.

It does exist today.  It is going to get us some of the way.

Thank you.

DR. SHUREN:  Questions?

DR. CUNNINGHAM:  That is an excellent system that you have.  I have one question -- really, two.  Can you give an example of where you had an event, where you sent a warning to the practitioners and how you evaluated the implementation of your warning?  So what was the outcome of one of your safety events?  That is the first question.

The second question is, what is the methodology that you use most often to disseminate your warnings to your practitioners?

DR. MORRIS:  I am with ProSanos.  I am not with KP, although we do work together.

What I will say is that the way KP has implemented this internally has been in various classes of products.  When there have been notices or when labels have changed, they have looked at the practice, gone back, and in some cases taken that product off the list of what a physician can prescribe.  They have added extra education for the patient or for the providers.  They have also asked additional questions, to say, is this something that we are seeing?  Is this real?

That has been in asthma.  They have done it in diabetes.  There are a number of high-visibility disease areas where that is a significant cost driver for them and where they can increase member safety and member health outcomes.

So I would say, again, I am speaking of them, but not for them.

If you look at the opportunities to drive this, it has to come into areas where data makes a difference.  There are a number of critical, longitudinal disease areas where we spend a lot of money on the health-care side and a lot of money on the pharmaceutical side.  Those are going to be the optimal targets for first implementing this kind of program.

DR. SHUREN:  Thank you.

Eric Sacks from ECRI.

Agenda Item:  Presentation by Eric Sacks 

MR. SACKS:  Thank you for the opportunity to present ECRI’s perspectives.

My name is Eric Sacks.  I am representing ECRI.  ECRI is a wide-ranging nonprofit organization involved in a variety of research activities, ranging from technology assessment to quality improvement at health-care facilities, through benchmarking studies, to technology management and procurement activities.

Since the late 1960s, our flagship business has been objective research on medical-device technologies, through comparative evaluation and through a problem-reporting network that hospitals participate in.  Since the early 1970s, we have been providing a safety-alerting service.  I currently manage a system that automates the distribution of, typically, 20 to 40 recall and hazard reports weekly, routing each to the specific clinical and other professional areas throughout the hospital for response, with centralized data acquisition.
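
The routing-and-response pattern just described can be sketched in a few lines; the domain names, areas, and report fields below are invented for illustration and are not ECRI’s actual schema or workflow engine.

    # Sketch of rule-based alert routing with centralized response tracking.
    # Domains, areas, and fields are invented for illustration; this is not
    # ECRI's actual schema or workflow engine.

    ROUTING = {  # product domain -> responsible hospital areas
        "infusion_pump": ["biomedical_engineering", "nursing", "pharmacy"],
        "implant": ["surgery", "materials_management"],
        "pediatric_product": ["pediatrics", "materials_management"],
    }

    responses = {}  # (alert_id, area) -> status, collected centrally

    def distribute(alert_id, domain):
        """Fan an alert out to every area mapped to its domain."""
        for area in ROUTING.get(domain, ["safety_office"]):  # catch-all default
            responses[(alert_id, area)] = "pending"

    def record_response(alert_id, area, status):
        responses[(alert_id, area)] = status  # e.g., "no affected lots on hand"

    distribute("2007-W10-017", "infusion_pump")
    record_response("2007-W10-017", "pharmacy", "no affected lots on hand")
    pending = [k for k, v in responses.items() if v == "pending"]
    print(f"{len(pending)} areas still owe a response")  # 2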

Of all of ECRI’s activities, the focus of my comments will be safety alerting in the niche area of devices, with a particular focus on capital equipment, which I don’t think has been addressed today quite so specifically as pharmaceuticals and, in the device area, implants.

ECRI’s reporting systems are not restricted to medical devices.  For example, on behalf of the Commonwealth of Pennsylvania’s Patient Safety Authority, we run the largest medical-error/near-miss reporting system in the country.  Since the PA-PSRS program’s inception in June 2004, 775 participating health-care facilities have submitted over 450,000 near-miss and incident reports.  Based on that combined experience, we have several observations to share and then some thoughts and recommendations for the Sentinel Network system.

First, echoing Dr. Resnic’s comments about human analysis, based on our experience, we have found that the human expert analysis of reports and detected trends in incident reporting is essential to the isolation of valid hazards and developing practical solutions.

Next, with regard to device incidents, many critical device incidents are isolated from extremely small sample sets.  So it will be interesting to see, as the Sentinel Network develops, the ability of the system to detect and treat things like cases where surgical fires, for example, are occurring in specific types of procedures or over-infusion incidents are occurring with specific models of medical devices.

Some of the registry presentations today had to do with very careful record taking with regard to the implants and procedures that were used with specific patients.  As many of you probably are aware, with respect to capital equipment, that sort of detailed recordkeeping as to which specific devices are used on which patients is frequently not present.

Next, a large proportion of device events result from operator error or misuse.  But at the same time, the occurrence of defective products is still a significant risk.  In a world where there is a great deal of regulatory emphasis on defective products and recalls, education to address and avoid repetition of user error is a critical area.

Finally, device reporting -- this is echoing the statements of Dr. Reynolds with regard to the MAUDE database -- device reporting is inherently prone to poor signal-to-noise ratio and a significant level of misinformation.  If you consider the principal sources of medical-device event reports, they are health-care provider organizations and product suppliers -- the very organizations whose employees are frequently contributing to or causing user errors and the organizations whose products are sometimes contributing to or causing patient harm.

In consideration of the patients who each day need safe and effective care for their conditions, treated as they ought to be in mainstream medicine, I want to focus on the health-care provider’s need for a clear plan for navigating current hazards.  In examining the concept of public and private alignment of safety efforts, it is telling to reexamine the climate of 1973, when Congress was considering expansion of FDA’s medical-device regulatory responsibilities.  In testimony before the United States Senate Committee on Labor and Public Welfare, ECRI founder Dr. Joel Nobel described an environment in which many observers of the health-care community perceived that medical-device hazards were serious and widespread, and that health-care professionals did not react to these problems adequately.  This was the time of the microshock electrical-safety scare.

Then as now, postmarket surveillance heavily emphasized identifying and isolating defective products and unscrupulous purveyors.  That certainly is a key function for a regulatory agency.  Dr. Nobel went on to testify that “few things are more frustrating or useless than being presented with a statement that an essential medical tool is dangerous, without being given a solution or alternative.”

Both in the case of products that are defective by design or manufacture and in cases where there are continuously repeated operator errors, these are key areas where the private sector can complement FDA’s work in developing and disseminating practical solutions and alternatives.

Towards advancing that collaboration, ECRI presents the following recommendations to FDA:

  • Reexamine postmarket event reporting with an understanding of the limitations of both hospital and industry incident reports.  Involved parties frequently look right past underlying causes.  Invalid analysis arises from biases ranging from vested interests to being too close to the situation.
  • Support competent, objective, and timely research and analysis to guide the health-care community with unbiased perspectives on preventing safety risks, both reported and unreported.
  • Work to cultivate private-sector partners that can complement the regulatory efforts of FDA with more complete solutions for health-care providers in contending with the disruptions to patient care that arise from necessary regulatory actions.

ECRI is prepared to work with FDA on this new effort, but we will expect efforts to be explicitly and thoughtfully designed to be mutually beneficial to all stakeholders, with the needs of the patient held as the highest priority.

Thank you.

DR. SHUREN:  Thank you.

DR. GROSS:  I have a question related to the human-factors issues, the operator errors, design issues.  From what you have heard today, how can that information be incorporated into a sentinel surveillance system?

MR. SACKS:  What is required is the interpretation of the underlying causes as trends and alerts arise.  The activities that take place in a sentinel event-reporting system need decision mechanisms that help to quickly identify when design factors -- whether it be human factors in the design of the product or just something inherent to the use of a technology in patient care -- contribute to the incidents reported.  In some cases there may be a human-factors issue in the way an electronic infusion pump is programmed that lends itself to factor-of-10 programming errors that cause over-infusions.
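
A factor-of-10 programming slip is exactly the kind of error a simple software guard can intercept.  A minimal sketch of a dose-limit check follows; the drug name and limits are invented placeholders, not values from any real pump’s drug library.

    # Sketch of a soft dose-limit guard against factor-of-10 programming
    # errors on an infusion pump.  The drug name and limits are invented
    # placeholders, not values from any real pump's drug library.

    DRUG_LIBRARY = {  # drug -> (typical_rate, hard_max), both in mL/hr
        "heparin_25k_in_250ml": (10.0, 40.0),
    }

    def check_rate(drug, programmed_rate):
        typical, hard_max = DRUG_LIBRARY[drug]
        if programmed_rate > hard_max:
            return "BLOCK: rate exceeds hard maximum"
        if programmed_rate >= 10 * typical:
            # classic decimal-point slip: 100 entered instead of 10
            return "WARN: rate is 10x the typical value; confirm intent"
        return "OK"

    print(check_rate("heparin_25k_in_250ml", 10.0))   # OK
    print(check_rate("heparin_25k_in_250ml", 100.0))  # BLOCK: a 10x slip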

In other cases there can be very simple convergences of technology -- highly enriched oxygen in endotracheal airways and electrosurgical units used in head and neck surgery.  That convergence of technologies frequently can contribute to a highly hazardous situation that the clinician needs to be educated about and made aware of.

DR. SHUREN:  Other questions?

[No response]

Thank you.

Alex Frost from Sermo.

 Agenda Item:  Presentation by Alex Frost

MR. FROST:  I will read the title here, just because it might not be legible on the screen.  I have entitled this very short presentation “Utility of a market-based online physicians’ community to detect and clarify signals related to medical product safety.”

I think there has been a common theme in many of the discussions and presentations here, and that is the use of new technologies to aggregate and analyze new data sources and existing data sources.  The focus of my presentation here is a construct for new technologies to actually aggregate and mobilize people, to mobilize physicians on the front lines of medicine to be involved in more steps in the analysis and recording of events, both positive and negative.

Sermo is the company I am representing.  Sermo has been, in a very simplified form, described as “the MySpace for physicians,” by popular publications.  The technology underlying Sermo is basically building a community for physicians to interact in a peer-to-peer network.  In this peer-to-peer network, there are multiple incentives and broad incentives for physicians to share simple clinical observations.

Sermo has technological underpinnings.  It is designed on social-network theory, with components of game theory, prediction markets, and an information-arbitrage business model, which allows us to present this as a resource to physicians.  It has great utility for decision support, for community interaction, and for peer-to-peer conversations.  As a result, on the back end, we have the capacity to aggregate information, look at the aggregate information, and identify emerging clinical trends.

The example that didn’t show in that slide is the simple data object for physicians to participate in the Sermo system.  It’s what we call a posting.  A posting basically couples a traditional linear discussion board with a quantitative component.  A posting through Sermo is not a complicated form in any way.  It is designed to be filled out in about 90 seconds.  But we find that the physician-users of Sermo are spending much more time, five or 10 minutes and longer, to fill out basically case presentations -- “I’ve seen this patient with this” -- and seeking input from their colleagues.

The quantitative component is critical in the structure, because it is designed to facilitate corroboration or repudiation of these individual events.  What this means is that it provides a very rapid tool for turning individual case reports into case series.  In practice, there are strong incentives for the physicians to participate.  We currently have about 10,000 physician-participants in the system.  A physician can present an observation into the system and expect a response from one of their colleagues within five to 10 minutes.  That is about the data velocity of the system as it stands right now.
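
A minimal sketch of that posting construct -- a free-text observation coupled to a quantitative corroborate-or-repudiate question -- might look like the following; the field names and tally logic are assumptions for illustration, not Sermo’s actual data model.

    # Minimal sketch of a "posting": a free-text observation coupled to a
    # quantitative corroborate/repudiate question.  Field names and tally
    # logic are illustrative assumptions, not Sermo's actual data model.
    from dataclasses import dataclass, field

    @dataclass
    class Posting:
        author: str
        observation: str                 # "I've seen this patient with..."
        question: str                    # "Have you seen this too?"
        votes: dict = field(default_factory=dict)  # physician id -> bool

        def respond(self, physician_id, seen_it):
            self.votes[physician_id] = seen_it

        def tally(self):
            yes = sum(self.votes.values())
            return yes, len(self.votes) - yes

    p = Posting("dr_a", "Rash within 48h of starting drug X", "Seen this?")
    for doc, seen in [("dr_b", True), ("dr_c", True), ("dr_d", False)]:
        p.respond(doc, seen)
    print(p.tally())  # (2, 1): corroborations turn a case report toward a series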

What does Sermo mean in the context of adverse-event reporting, looking at medical trends, both positive and negative?  Sermo is really a new paradigm for listening to what is going on in the medical community.  There is frank discussion.  There are large amounts of activity, where physicians are using this now as part of their daily practice for decision support and other factors, where they can basically rely on input from their peers.  The Sermo system also has a large, growing dataset of case reports and feedback.

Other thoughts on this:  A key component in this is that the data in this system is active and interactive.  It is dynamic.  That means that data can progress through time.  This is not post hoc analysis of forms that have been filled out.  This really could be a very powerful complementary approach to couple information in the medical community and build signal-to-noise clarification, using this physician base.

In the context of the meeting today, there have been many provocative presentations.  One of my objectives in coming here today was to see how we can find partnerships and collaborations to bring our dataset into this sector.  Certainly, this is a qualitatively new dataset.  It is a new construct in the field of clinical informatics.  We can have active participation from outside parties in the system.  That means if you have a signal that needs clarification -- if you have a signal about some adverse event and you are wondering about causality or about its occurrence rates -- we have the capacity in the Sermo system to present questions into the system and literally get results from thousands of physicians in a matter of hours.

In terms of gaps in existing and future products, because this is an active network of people making decisions and presenting their ideas, the qualitative information gathered can certainly be used to fill in gaps where we see deficiencies in the existing protocols.

Steps for further evaluation:  One is coming here to meet people and discuss possibilities for integration of the Sermo system, in multiple ways, with alternative and existing data platforms.

Certainly we need to look at the space between the Sermo system, which is a very simple interface, very simple level for participation, and more complicated data, as in MedWatch forms.

Again, another success point in this is the continued expansion of the Sermo user community.  Right now we are at about 10,000 active physicians, after five months of activity.  We are on track and on scale for having about 75,000 participants by the end of the year.

I would be happy to answer any questions.

DR. SHUREN:  Thank you.

DR. BRAUN:  That was an interesting presentation and idea.  For members of the audience and panel who are interested in learning more, can one just type “Sermo” into Google and go there?

MR. FROST:  Yes, you can do that.  One of the components of the system, and one of the powerful concepts in this being a safe community for interaction, is that we built a real-time authentication and credentialing system.  If you are an MD, you can gain access to the system by answering a few challenge questions, and we will verify who you are.  But then you can come into the system, look at the data, and participate.

DR. BRAUN:  You said there were 10,000 physicians.  They are distributed among a wide variety of specialties?  Is that fair to say?

MR. FROST:  That’s correct.  Certainly at this point, we have physicians in all states, everything from tertiary hospital settings to individual, very small practices, and across all specialties.  We do not burden the physician-participants with putting in information on their demographic background.  However, in a de-identified way, we can couple all the data created in Sermo with pretty substantial information about the specialty, place of practice, and other demographic information about the practicing physicians.

DR. BRAUN:  Just two more quick ones.  How frequently do they visit, on average?

MR. FROST:  One of the constructs I described in this -- well, I will try to pull things from real data.  We had 365 new physicians register just yesterday.  One thing that is very substantial is that Sermo really can be grouped with Web 2.0 phenomena.  I don’t think there is a very close alignment, but there is some alignment in this.  The concept is to use online communities to present information and filter information.

Typically, online communities are composed of a very high percentage of people who are just observers, lurkers, and a very, very small percentage of people who are participating.  Often much less than a fraction of a percent of the community itself participates.

By creating a very simple quantitative mechanism, a survey -- “have you seen this case under these circumstances,” or, “do you think that there is a causal relationship between drug X and this syndrome or symptom” -- we have user participation rates of 15 percent and higher.  That is part of the structure and the simple data object -- so very, very frequent participation.

DR. BRAUN:  Can you briefly define, for those of us who don’t know, what prediction markets and information arbitrage are?

MR. FROST:  Sure.  The concept here is fundamental.  A prediction market is theory and application from the economics field.  The fundamental premise of prediction markets is that you can aggregate information from a group of people in which no one knows the answer to a particular question, but, if their backgrounds are diverse enough, you can aggregate that information effectively to come up with the answer.  Prediction markets have seldom been applied in the health-care space.  One application, a university collaboration with CDC, created a flu prediction market to predict the strains involved in the influenza season.  It proved relatively successful.
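
As a toy illustration of that aggregation premise -- no single participant knows the answer, but a diverse pool can converge on it -- here is a sketch that pools independent probability estimates; the simple weighting scheme is an assumption for illustration, not Sermo’s actual mechanism.

    # Toy sketch of the prediction-market premise: pool many imperfect,
    # diverse estimates into one aggregate.  The simple weighted mean is an
    # illustrative assumption, not Sermo's actual mechanism.

    def aggregate(estimates):
        """estimates: (probability, weight) pairs, where a weight might
        reflect a participant's track record.  Returns the weighted mean."""
        total_weight = sum(w for _, w in estimates)
        return sum(p * w for p, w in estimates) / total_weight

    # Hypothetical: physicians estimate P(strain A dominates this flu season)
    opinions = [(0.6, 1.0), (0.8, 2.0), (0.4, 0.5), (0.7, 1.5)]
    print(f"market estimate: {aggregate(opinions):.2f}")  # 0.69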

An information arbitrage model is simply a model of incentive and the pass-through of data from one side that holds the data to another side that seeks the data.  In this case, this is how Sermo supports itself as a business and becomes fully scalable.  We rely on being able to sell access to aggregated information to different sectors, such as government and academia, the health-care sector, and even investment industries.  By doing so, we can pass through additional incentives, through infrastructure, through some financial incentives, to the participants in our system.

The main aspects of this, and the fundamental aspects of importance, are that we are creating a large, robust interface with very simple participation for the physician-users.  They have multiple incentives.  It has no advertising and can be a space that is free of the perception of pharma influence.

DR. SHUREN:  Just a follow-up to that.  Are there financial incentives offered if someone reports an event for the first time?  Are there financial incentives offered if someone participates in one of the survey questions?

MR. FROST:  Good question.  For clarification, the current construct for the system, five months in, is that there are generally small financial incentives for the physician-users for being good users of the system.  That can mean presenting quality data, as measured by our quality metrics.  It does not necessarily mean discussions of adverse events.  It doesn’t necessarily mean discussions of information that has financial relevance.

We are collaborating with folks from the game-theory sector and the social-network-theory sector to create ranking and rating algorithms, along the lines of eBay user ratings, through which we can modify and adjust incentives for participation and reward people fundamentally for presenting good clinical observations.

DR. SHUREN:  Any success stories so far, five months out?

MR. FROST:  I don’t want to talk about specific drugs.  Sermo is not positioning itself to be in the business of analyzing this data.  That is where we seek to present it to clients and partners.  But success stories can be, for us, really in terms of data velocity and the capacity of this as a network for human decision making.  In our testing of the system, when we had only about 7,000 physicians -- and that was just about six weeks ago -- we pushed a hypothetical case out to the community and were able to garner 500 responses in around two hours.

So in terms of the data velocity, it’s very successful.  When it comes to specific examples related to regulated medical products, I would be happy to talk offline.

DR. SHUREN:  Thank you.

Harry Fisher, Northrop Grumman.

Agenda Item:  Presentation by Harry Fisher

MR. FISHER:  Thank you very much.  My name is Harry Fisher from Northrop Grumman.

I know it has been a long day.  Fortunately, this is one of my shortest presentations ever.  I have to thank you for the opportunity for this, because I have one slide.  In five minutes, I can’t expect you to remember more than one thing, and, quite frankly, in five minutes, I don’t know if you can remember anything.

It is connectivity.  Why is connectivity important?  Why is it relevant for me to talk to you about it?

What we have done at Northrop Grumman are some things that are very relevant to this discussion.  The way I would describe that is, if you cannot connect in a Sentinel Network, you will not be successful at all.  But connectivity is not about systems.  I will give you an example.

I know there has been some discussion of MedDRA earlier today.  Northrop Grumman is the MSSO.  We are responsible for MedDRA.  We have been doing it for five years.  We were recently extended for another five years, both at our request and the MedDRA management board’s request.

What does that mean?  Since we are all friends here, I am going to tell you about MedDRA.  What I am going to tell you is, if MedDRA is your adverse-event reporting system, I suggest you immediately seek legal counsel, because you are in deep trouble.

What is MedDRA?  MedDRA is part of connectivity.  If you look down here, one of the things we have is challenges -- structure, taxonomy.  MedDRA is an adverse-event ontology.  That’s it.  If you are from the U.S. and I pick you up and drop you into Japan and you talk to a clinician in Japan, and he doesn’t speak English and you don’t speak Japanese, you can talk MedDRA.  You can get to the bottom of an adverse event.  That is very discrete, very purposeful, and very successful.
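
The reason two clinicians with no common language can “talk MedDRA” is its fixed five-level hierarchy, which rolls a reported term up to shared coded terms.  A sketch of the lookup follows; the example terms approximate real MedDRA entries but are shown here for illustration only.

    # Sketch of MedDRA's five-level hierarchy: a verbatim term maps to a
    # Lowest Level Term (LLT), which rolls up through PT, HLT, and HLGT to
    # a System Organ Class (SOC).  The terms below approximate real MedDRA
    # entries but are included for illustration only.

    HIERARCHY = {
        "heart attack": {                 # verbatim term / LLT
            "PT": "Myocardial infarction",
            "HLT": "Ischaemic coronary artery disorders",
            "HLGT": "Coronary artery disorders",
            "SOC": "Cardiac disorders",
        },
    }

    def code_event(verbatim):
        """Return the coded rollup for a clinician's verbatim term."""
        # Real coding tools suggest candidate LLTs; this sketch just looks up.
        return HIERARCHY.get(verbatim.lower())

    print(code_event("Heart attack"))
    # Two clinicians who share this rollup can compare cases at any level,
    # whatever the language of the original report.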

To give you a stat -- this isn’t about MedDRA, but it’s all about the connectivity -- if you think about a public and private partnership and what is relevant for sentinel, you have to have all the parties knowing what they are supposed to do -- very defined, very discrete.  It has to have demonstrable and measurable effects.

When I think about connectivity, what does it mean to the dollars?  The largest subscriber two years ago to MedDRA spent $87,000.  This year they will spend $67,000.  How many places have you worked, how many products have you bought that have had that significant a decrease in two years?

To further add to that example, one of the top five pharma companies that I recently have been speaking to has said, “We spend north of $200 million a year on pharmacovigilance.  Help.”  Well, $200 million a year, and your entire language is based on a $67,000 investment per year.  It seems pretty good.

But the answer is not about the ontology, for us.  When we look at connectivity, we see an explosion of data, an explosion of data that exists currently.  We see that some of that is actual information and precious little of that is actual knowledge.

I have some things up there.  Some you can see; some you can’t.  But let me point you to the thing over on the far right, which is AHLTA.  We run the largest EHR system in the world, the military health system’s AHLTA program.  We have about 10 million beneficiaries.  This is a genuine EHR.  There is an explosion of data in there.  But I can tell you one thing:  It has zero to little interest in finding adverse events for anyone in this room.

So what do you do?  You have to figure out how you can connect to AHLTA, because AHLTA is not going to connect to you.  If you go around that circle, all those concepts of connectivity come into play.

We are one of the largest providers of services at the CDC.  What is CDC working on?  They are working on a public health information network.  What else are they working on?  BioSense.  They are trying to figure out the same types of things that FDA is trying to figure out, but they have a different angle on it.

So is the answer to duplicate their system?  Of course not.  The answer is, FDA has to connect with CDC, but not just CDC.  There is a whole wealth of public and private providers of information out there.  The question really becomes, does FDA want to build a system that they put in a box and drop at Fishers Lane or at White Oak and say, “Now we’re ready to go”?  The answer, in our opinion, is no.  You will never get to where you need to be.

But you need to figure out, how are we going to connect?  If you think about the variety of folks who spoke, if you think about -- just recently, at HIMSS last week, there were acres and acres of vendors who want to do nothing but sell services to anyone who will buy them, and software to anyone who will buy it.  We are a service provider.  We are happy to provide services to those who will buy them as well.

But the answer is, what is your return on investment?  The pure fact of the matter is, in a sentinel environment, if you buy disparate systems, if you have disparate languages, if you have disparate goals, you will never be able to connect, and I don’t think you will be able to realize the promise of sentinel.  There are competing areas out there.

But I would hark back to the fact that there are a lot of very inventive, very bright folks working on this, who have very novel solutions.  The key, in my mind, is that whatever is built within sentinel has to be open and has to be modular.  You don’t know, five years from now, what the latest and greatest will be.  Maybe it’s another Sermo.  Maybe Sermo is the greatest thing.  You have to have the ability to connect.  That is not just data.  That is processes, that is privacy, that is compliance, and all of those things.

So at the end of the day, if I can leave you with one thought, I leave it as connectivity.  We think about public-private partnerships.  In addition to a variety of things we do, we are already involved in a very successful public-private partnership with MedDRA.  We already are involved in the largest EHR in the world.  That experience points us to connectivity.

Thank you.

DR. SHUREN:  Questions?

We are not hearing anything because you are right on target.  That is what sentinel is all about -- connectivity.  We are not going to drop another box at Fishers Lane or White Oak.

Thank you.

Next, Denise Love, National Association of Health Data Organizations.

 Agenda Item:  Presentation by Denise Love

MS. LOVE:  Finally, I have the last word.  That doesn’t happen often.  I will make this very quick.

I am the executive director of the National Association of Health Data Organizations.  Since 1986, NAHDO has been helping state and private data organizations collect, analyze, and release health-care data.  I personally spent nine years of duty at the Utah Department of Health, building their statewide inpatient, ambulatory surgery, emergency department, and HMO reporting systems.  So we know at NAHDO firsthand that building a multi-enterprise and large data network is no easy undertaking.

That said, today, 38 states have legislative mandates to collect and release health-care data.  Ten states without mandates collect data as well, as a voluntary effort among hospitals.  The data that we specialize in at NAHDO are all-patient/all-payer data from acute-care hospitals, typically, across a state.  Each state varies a little bit in how they implement, but there are many commonalities.  We work through NAHDO to promote uniformity of these data systems.

Some of our states are now all-payer/all-claims data systems.  Twenty-eight states have patient safety reporting systems.  Twenty states are reporting health-care quality and cost data for consumers, such as infections and outcomes reports.  All states have vital statistics systems, including mortality data that are coded and textual, which may be of interest to any sentinel surveillance system.  Many states link those datasets across providers, settings, and data types.
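
Linking those datasets typically comes down to deterministic or probabilistic record matching on shared fields.  A minimal deterministic sketch follows, with invented field names and records; production systems use probabilistic matching and far stronger privacy controls.

    # Minimal sketch of deterministic record linkage across two state
    # datasets.  Field names and records are invented for illustration.

    def link_key(rec):
        """Build a match key from fields the datasets share."""
        return (rec["dob"], rec["sex"], rec["zip"])

    discharges = [{"dob": "1950-02-14", "sex": "F", "zip": "84101",
                   "dx": "myocardial infarction"}]
    deaths = [{"dob": "1950-02-14", "sex": "F", "zip": "84101",
               "cause": "cardiac"}]

    index = {link_key(d): d for d in deaths}
    linked = [(d, index[link_key(d)])
              for d in discharges if link_key(d) in index]
    print(f"{len(linked)} discharge record(s) linked to a death record")  # 1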

In short, states have demonstrated that data use and access for multiple users and for multiple uses is possible while protecting confidentiality.

 I know it is late in the day, and I have been stressed out for an hour about how I am going to end this talk.  I want to make it short.

Some of the data realities in my work, both personally and through NAHDO -- I just thought I would do a little bad-news/good-news sort of thing.

The bad news is, there is never enough funding for any data system.  The good news is, it really isn’t all about money.  It takes political will, vision, and leadership.  Those are the things that really matter.

Bad news:  There is no perfect data source.  I don’t believe electronic health records are the holy grail.  So we need to make do with what we have now and build.  The good news is, we do have existing data in place.  We heard our speakers earlier talk about some of the data assets that we have in this country.  These data, if used, will improve.

One example: NAHDO’s community collects administrative data.  A common criticism of administrative data has been that it lacked clinical detail -- we could not tell whether an event occurred pre-admission, was something the patient came in with, or arose post-admission.  With the work we have been doing with our partners and with the standards groups, the good news is that CMS is now requiring a present-on-admission indicator for Medicare.  This has been long in coming, but it will add huge value to our data systems.
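
The value of the present-on-admission flag is that it lets an analyst separate pre-existing conditions from hospital-acquired ones on the same record.  A small hypothetical illustration, with invented codes and flags:

    # Sketch of how a present-on-admission (POA) indicator separates
    # pre-existing from hospital-acquired diagnoses on one record.
    # Codes and flags are hypothetical illustrations.

    claim = {
        "diagnoses": [
            {"code": "998.59", "desc": "postoperative infection", "poa": "N"},
            {"code": "250.00", "desc": "diabetes mellitus",       "poa": "Y"},
        ]
    }

    hospital_acquired = [d for d in claim["diagnoses"] if d["poa"] == "N"]
    print([d["desc"] for d in hospital_acquired])  # ['postoperative infection']
    # Without the POA flag, the infection would be indistinguishable from a
    # condition the patient arrived with -- exactly the gap described above.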

Some states are starting to look at laboratory data with their discharge data, to look at outcomes and refine those datasets for quality measurement.

Provider reporting burden is real, but building on existing systems, we can reduce that burden.

So in short, NAHDO recommends that in this exploration of the Sentinel Network we really look at leveraging state reporting infrastructures.  There are legal authorities in place.  There are analytic capacities in place.  States are familiar with and willing to work with federal partners.

We also ask that you join our efforts to enhance and improve the existing datasets.  Just because a dataset does not meet today’s reporting needs doesn’t mean that it can’t tomorrow, with the right data elements and standards in place.  We actively work with all of the standards organizations to improve the measures and the data elements to populate those measures.

Lastly, I ask you all to work with the states to explore a research agenda that our population datasets can support, so that we can inform future discussions about surveillance, tracking, and patient safety more effectively using population-based datasets that are in place.

Is anyone driving to the metro, because I need a ride?  And I’ll quit now.  [Laughter]

DR. SHUREN:  I have seen people promote software before, and other technologies.  I have never heard someone try to bum a ride.

MS. LOVE:  At NAHDO we are very creative.  [Laughter]

DR. SHUREN:  Any questions?

[No response]

All right, thank you.

I will give this one more chance, because I did call earlier than the allotted time.  Is Margaret Binzer here?

[No response]

We did not have on our list anyone who signed up for the open mike.  I just want to make sure that we didn’t miss someone.

Again, thank you for coming today.  This is part 1.  Part 2 is tomorrow.  Part 2 is going to be organized very differently.  There are no set presentations.  We are going to have an interactive dialogue -- folks from the federal government, who will be much more vocal tomorrow, with our invited speakers -- but also a chance for everyone in the room to weigh in as well.  We will not do sign-ups.  We will have mikes available, and it will be: come up, get in line, and say what you have to say.  We do want to hear from you all.

People have been asking about slides, a list of presenters.  All of those will be made available on our Web site by close of business this Friday.  It’s the FDA address (Sentinel Home Page).  It will be posted up there by close of business Friday.

Anyway, thank you again.  We will see you tomorrow.

(Thereupon, at 4:30 p.m., the meeting was adjourned, to reconvene March 8, 2007.)