
Public Workshop - Mobile Medical Applications Draft Guidance, Transcript for September 13, 2011

UNITED STATES OF AMERICA

DEPARTMENT OF HEALTH AND HUMAN SERVICES

FOOD AND DRUG ADMINISTRATION

+ + +

CENTER FOR DEVICES AND RADIOLOGICAL HEALTH

+ + +

MOBILE MEDICAL APPS DRAFT GUIDANCE

+ + +

September 13, 2011

8:00 a.m.

 

FDA White Oak Campus

10903 New Hampshire Avenue

The Great Room (Room 1503)

White Oak Conference Center, Building 31

Silver Spring, Maryland 20903

MODERATOR: BAKUL PATEL, CDRH, FDA


 

PRESENTERS:

KRISTEN L. MEIER, Ph.D., FDA

P. JON WHITE, M.D., Agency for Healthcare Research and Quality

STAN PESTOTNIK, M.S., R.Ph., TheraDoc Clinical IT

RICHARD J. KATZ, M.D., George Washington University

MERYL BLOOMROSEN, American Medical Informatics Association (AMIA)

MEGHAN DIERKS, M.D., Beth Israel Deaconess Medical Center

SESSION 3, PANEL 5:

MEGHAN DIERKS, M.D., Beth Israel Deaconess Medical Center, Moderator

P. JON WHITE, M.D., Agency for Healthcare Research and Quality

STAN PESTOTNIK, M.S., R.Ph., TheraDoc Clinical IT

RICHARD J. KATZ, M.D., George Washington University

SATISH MISRA, M.D., iMedical Apps

MICHAEL M. SEGAL, M.D., Ph.D., SimulConsult

WENDY J. NILSEN, Ph.D., National Institutes of Health

MERYL BLOOMROSEN, American Medical Informatics Association (AMIA)

KRISTEN L. MEIER, Ph.D., FDA

PUBLIC SPEAKERS:

JOVIANNA DiCARLO, International mHealth Standards Consortium

MARK JEFFREY, TeleMedicine and Advanced Technology Research Center (TATRC), USAMRMC

OTHER PARTICIPANTS:

DAVID S. HIRSCHORN, M.D., American College of Radiology

BRADLEY THOMPSON, Epstein, Becker & Green


 

INDEX

SESSION 3 - SESSION OVERVIEW - Bakul Patel, CDRH, FDA

PRESENTATIONS

Kristen L. Meier, Ph.D.

P. Jon White, M.D.

Stan Pestotnik, M.S., R.Ph.

Richard J. Katz, M.D.

Meryl Bloomrosen

Meghan Dierks, M.D.

PANEL 5 - Categorizing Standalone Clinical Decision Support Software - Meghan Dierks, M.D., Moderator

Q/A Audience Interaction

PUBLIC PRESENTATIONS

Mark Jeffrey

Jovianna DiCarlo

ADJOURNMENT - Bakul Patel, CDRH, FDA


 

M E E T I N G

(8:20 a.m.)

MR. PATEL: Good morning. It's 20 after, and I figure we should start now, given the agenda said at 15 after, and that basically puts me down to speaking less in the beginning so that we can have the presenters start.

Good morning and welcome back to Day 2 of the Public Workshop on Mobile Medical Applications Draft Guidance.

Yesterday we had a great discussion. Before I get to all of that, let's do the formalities and get the logistics out of the way so that we can get to the program.

The participants will only have access to this building, that's Building 31. Visitors cannot be escorted to other parts of the building or to other buildings on this campus, so please don't ask FDA employees in this room to help you do that; security prefers not to do it. So please bear with that. If you leave the building for any reason, during breaks or whatever, please allow extra time to get back.

So having said that and getting that out of the way, restrooms are behind the registration desk. At lunchtime there will be food; the kiosk there can help you with food and snacks through the day. I didn't announce that yesterday, but I think people found it easily enough.

So let's get on with the program. Yesterday we talked about two of the three topics that we posed as part of the guidance, topics important enough and interesting enough that the Panel led two very passionate and interesting discussions. I'll summarize those at the end of the day today, just the take-away messages at a very high level, and we'll also outline the next steps for how we're going to go about using this information.

Today's topic is equally important in helping the Agency clarify its position on CDS, that is, Clinical Decision Support. It can come in various formats, sizes, and shapes, and you'll see that in the presentations up front by some of the Panelists and presenters who are going to talk about this topic.

Interestingly enough, as you can imagine, what do we call decision support? It can range from a very simple medication reminder that helps people make, or supports, the decision of taking a medication, to very complex things like -- let's not even talk about complex yet. Let's talk about something in between: during an encounter, helping a clinician make certain decisions while they're actually treating a patient, and it could be simple things like a drug-drug interaction check, an allergy check, or other similar things.

Let's just take it to the next level. What about radiation treatment therapy driven by a complex algorithm? I was just talking to one of the Panelists about this particular example. What if that complex algorithm takes a bunch of inputs and provides you with a blanket decision that you totally rely on? How does that play into this picture?

So as you can tell from the few examples I just cited, and you'll probably see a lot more as the presentations come up and the Panelists talk about this, there's also decision support that resides in a device, in a medical device, and I'll give you an example. You can think about a cardiac monitor, a bedside monitor: as it sees the waveform come through as an input to that device, the machine itself analyzes the waveform and triggers alarms, giving you certain alerts or certain indications to tell you what to do next. I mean, that's one level.

We're not talking about those particular conditions where they're embedded in a system or embedded in an IVD machine. We're not talking about those.

We're talking about standalone, which means somebody writes a piece of software that takes a whole bunch of input, whether it's physiological input or environmental input, compares it to some database, takes it to the next level, and provides some level of support, whether to a patient, a caregiver, or a clinician in the process.

So as you can imagine, there's a plethora of factors that go into this, into the risk to patient safety at the end of the day, which is what we care about a lot. Patient safety at the end of the day requires being sure that when these decisions are made by the software, by the algorithm, by the implementation, there are many things to consider. What are those things to consider?

I've asked the Panel to think about that, discuss that, and help us identify some factors that will help us differentiate between things that, you know, physicians can accommodate in their practice, where there's already an intervention in place and somebody can tell the difference between good and bad and so on and so forth, and, at the other extreme, things that are totally autonomous, where a suggestion is given to a patient, even if it's not a complete titration, and it could lead to a situation we don't want, an adverse situation.

So that's really what we're talking about, standalone software. We're not talking about things inside the box that you are used to seeing.

That's a very high level overview.

So in order to address that, I think we need to understand the landscape, and the first part of today's session is going to be hearing presenters talk about different perspectives, different experiences, and different CDS applications they have seen. A lot of folks in the informatics world have been thinking about this for a long time, a lot of folks who are building medical apps have been thinking about this for a long time, and you'll hear that today as part of the first part of the session.

Following that panel of presentations, the Panel that's going to talk about the factors will try to answer two fundamental questions. How should we assess these kinds of software, and what factors are important to assess? What is important to parse out this area so we can differentiate between high risk and low risk -- patient impact in the end is what's important -- and what assurances do we need for this software to be safe and effective at the end of the day? So I think that's the quest here.

I would encourage everybody to continue participating in this discussion. Like yesterday, we'll have -- I'll get to the Twitter hashtag in a minute; somebody will be here at 9:30 to collect those -- two ways to participate. We have some FDA colleagues who will walk around and give you cards during the Panel discussion. Write your questions or comments on the cards, and they'll bring them to the moderator, who can incorporate them into the discussion that the Panel is having.

Number two, we have separate time slots allocated for folks to step up to the microphone -- and I see the microphones are missing, so we'll have to get some and get them in their sockets there. Come up to the microphone during the Q&A session and feel free to stand up and ask the Panel a question. Hopefully that will trigger more discussion, and the intent is to have you think about all these different factors that exist today, ranging from the simplest to the most complex decision support.

The hashtag for Twitter during the Q&A session will be -- pound sign, that is -- FDAmHCDS. It should be pretty obvious: it's FDA, mHealth, CDS. So that's the hashtag. I believe that's been posted, and if not, I'll make sure it gets posted.

Having said that, I will invite Kristen, Dr. Meier from the Center for Devices, to give us a brief overview of what we have been thinking internally at FDA, at the Center, and provide you sort of the initial thought process that we have had in terms of how to parse out this area.

So take it away, Kristen.

DR. MEIER: My slides, are they on here?

MR. PATEL: You need a mouse for that I guess.

DR. MEIER: Thank you, Bakul. This morning, as Bakul mentioned, I'm going to give you a brief overview of how we are thinking about CDS here at FDA. I will caveat that with nothing I say is set in stone. Maybe sand is a better analogy here. We are absolutely still discussing these issues, and that's why we're here today, and we look forward to your input and thoughts and learning from your experiences.

One way to define CDS is very broadly as those systems, softwares, modules, whatever, that use an individual's information from various sources and then convert that information into new information that's intended to support a clinical decision.

By information, I mean a wide range of types of information. It could be lab results, anatomical measurements, demographic data, clinical signs, the list goes on, something that's specific to a patient or an individual, and that data is converted into new information. That new information might be a specific recommendation, clinical advice, the risk of an outcome. Yesterday I heard "actionable information." I like that term.

We believe that CDS is a device according to the Food, Drug and Cosmetic Act, which Bakul mentioned yesterday in his slides as well. The key phrases from that are contrivances that are intended for use in the diagnosis of disease or other conditions . . .

Breaking down the definition a little bit further into the two bullets that I have, first of all, the information. It takes information. Our thinking is that how that information actually gets to the CDS is not so relevant to defining CDS. There's a wide range of information, and it can come from different sources and it can get there in different ways. It could be entered by a person, it could be a physician or an individual person sitting at a laptop, or through a mobile device, entering in clinical signs, patient history, personal data. The information could be transferred electronically from a manually connected device. You might have a device that's monitoring your blood glucose for instance or some other thing, and that data is being recorded. You're not acting on it at the time, but then that data maybe is downloaded or uploaded to some other device, and then that information is converted into some sort of actionable information.

Data could be entered directly through a device that's actually hooked up to the person and acquiring the information, and then that information is electronically passed to some sort of software that takes that information.

Again, you have to think broadly here. It could also be environmental data that might be important to the health decision. It could be pollen counts, temperature type information, or demographic data.

The other key aspect we believe is that there's some sort of information conversion process going on, and we believe that how that information is converted is important.

Again, I think we have to think broadly here. There's lots of ways information gets converted. It could be through using algorithms that are either fixed or iterative. It might be statistical formulas or models or statistical analysis features, again neural networks, you know, very complex kind of things. It could be simple kinds of calculator type formulas.

It might be a database lookup or a comparison. You might try and match the patient's attributes to those in a large database where you actually have some clinical follow-up data. It might be using rules and associations, like if this result is above six and the answer to this question is no, then we recommend this action. So there's lots of ways that information gets converted.
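
To make the rules-and-associations style of conversion concrete, here is a minimal Python sketch of a single if/then rule like the one just described; the field names, the threshold of six, and the recommendation text are illustrative assumptions, not anything presented at the workshop.

```python
# Minimal sketch of a rules-and-associations style of information conversion.
# The field names, the threshold of six, and the recommendation text are
# illustrative assumptions only.

def recommend_action(patient):
    """Apply a single if/then association to patient-specific inputs."""
    result = patient.get("lab_result")            # a numeric lab value
    answered_no = patient.get("question") == "no"

    if result is not None and result > 6 and answered_no:
        return "Recommend this action; review with the care team."
    return None  # no recommendation fires for this patient


if __name__ == "__main__":
    print(recommend_action({"lab_result": 7.2, "question": "no"}))
```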

This slide is a little complex here. I had a little trouble with this slide, in thinking about this, and I think it's because the landscape is complex. There's a really wide range of types of CDS, a very broad range, and there's different kinds of support that CDS provides.

We feel a characteristic of defining this landscape is the clinical impact of the decision that is based on the CDS. If you look in the middle slide here, you see that one component of that is the extent of reliance on the CDS output. Is it just one little piece of information that's part of lots of information that's used in the decision, or is it the primary information that's being used?

I think the extent of reliance on that information probably depends on who the user is and what their perceptions and training is and probably evolves over time.

What's the general acceptance of the CDS methodology that's being used? Is it just putting in software what we're already doing in practice, or is it some real new novel way of combining this information into new information?

How pervasive is the use? Is everybody out there using it, doing it, or is it just used by a small handful at one medical center?

What's the complexity of the clinical decision that is trying to be addressed? Is it normally a decision that requires lots of different things for the healthcare provider to consider, or is it very simple? You know, just check this value, and if this value is bigger than something, then we do this. So what's the complexity of the decision that we're trying to support here?

Some examples here are in two columns. Again, you might view the left-hand column here as less concerning CDS, less concerning from a patient safety perspective perhaps, and the right-hand column as more concerning. So again, Bakul mentioned some of these examples. You might have a simple calculator for a standard, well-accepted creatinine clearance formula, or for BMI, body mass index. It might be something that gives a reminder to actually do a test, get a consult, or take a medication. It might be a check for a drug-drug interaction or a check for allergies.
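
To make the simple-calculator end of that spectrum concrete, here is a minimal Python sketch of the two calculations mentioned, the standard Cockcroft-Gault creatinine clearance estimate and body mass index; it is illustrative only, not a validated clinical tool.

```python
# Sketch of the "simple calculator" end of the CDS spectrum: the standard
# Cockcroft-Gault creatinine clearance estimate and body mass index.
# Illustrative only; a real tool would validate inputs and units carefully.

def creatinine_clearance(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Cockcroft-Gault estimate of creatinine clearance in mL/min."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / (height_m ** 2)

if __name__ == "__main__":
    print(round(creatinine_clearance(65, 70, 1.2, female=True), 1))  # ~51.6
    print(round(bmi(70, 1.75), 1))                                   # 22.9
```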

On the other extreme, it might direct where you should do a biopsy. It might suggest a cancer treatment that is based on a proprietary algorithm that isn't widely accepted and understood.

As Bakul mentioned, it might provide radiation treatment recommendations that combine lots of different input from the patient, or it could be something that interprets very complex information to an untrained user. That untrained user could still be a healthcare provider who's not normally trained in the particular area.

What you don't see in some of these examples, and in the landscape, too, is that we're not focusing so much on the hardware. The hardware involved here doesn't seem to be, in some of our thinking, the defining feature.

Now, one of the Panel questions is to ask folks what is an appropriate approach for assessing reasonable safety and effectiveness? And I realize we use lots of FDA lingo and jargon around here, and those are terms that are defined in our regulations. So I just want to take a minute and show you how our regulations define these terms.

Let me also reiterate what was mentioned yesterday by Tony and Bryan: just because something's a device doesn't mean we're going to collect data on it and look at it and evaluate it. And even if it is something that we look at, that doesn't mean we're always going to require a premarket submission on it.

So bearing that in mind, if we decide that you need a premarket assessment for your device, we do look at safety and effectiveness where safety is defined as whether there's reasonable assurance that the probable benefits outweigh any probable risks. That's directly from our Code of Federal Regulations.

Effectiveness means that there's reasonable assurance that the use of the device will provide clinically significant results.

Also, there was a lot of discussion yesterday on intended use. Safety and effectiveness are evaluated in the context of the intended use of a device. I did want to add one reference for folks, since there was a lot of discussion on this yesterday, for what FDA might deem the highest risk kinds of devices: we did publish a guidance document a month ago called "Design Considerations for Pivotal Clinical Investigations for Medical Devices." It was issued on August 15th, and in that guidance there is a discussion of what I call diagnostic devices, or devices that assess a patient. There's a whole section on these devices in that guidance, and in there, there's a subsection in Section 8 that talks about intended use and how that impacts how we evaluate safety and effectiveness. Again, a lot of CDS devices would not fall under these guidelines or require this kind of support, but for the high risk ones, I would encourage you to absolutely go look at that guidance, which is still open for comments.

Again, we are especially concerned at FDA about the higher risk CDS, and we're hoping to get input from the Panel on what some of the factors are that we should think about when evaluating the risk or safety impact that these devices might have on the patient.

And some of the factors we came up with are listed here. We thought that the level of impact on the subject's health condition or disease was an important factor to consider. Again, what's the seriousness of the harm that could happen from a wrong decision? And what's the chance of harm as well?

What's the degree of acceptance of the CDS in clinical practice? Is it just standard practice that we're just trying to write now in software, or is it really some novel approach to how to make decisions for patients?

How easy is it to identify erroneous output due to different kinds of errors? So, erroneous CDS output. These errors might be due to procedural errors, because somebody is typing in the wrong information, or there's a data transmission error or a software bug. Yesterday we talked about how, no matter how hard you try, there are still bugs in software.

Another type of error could occur because the device is basically providing, the CDS is providing the wrong information, and by that I mean perhaps that the underlying model that's being used is just not the right model or it wasn't validated well, or the database, the knowledge base that was used to develop that model basically isn't applicable to the kind of patients that you might be using this in.

So there's lots of different reasons that that information might be wrong. So, again, it's not because of a procedural error but just the recommendation just isn't the one that should be made.

So, again, this is just our thinking and again not set in stone. We look forward to lots of your experience and thoughts on these issues as well. Thank you.

(Applause.)

DR. WHITE: Well, good morning. My name is Jon White. I work at the Agency for Healthcare Research and Quality, and my day job is that I direct the health IT portfolio at the Agency. So we're going to have all sorts of interesting in-depth technical discussion of clinical decision support and regulations and stuff like that.

I'm looking ahead at a couple hours of that. I couldn't stand it, so I figured I'd open up with a bad Mark Twain impression, since I like Mark Twain a lot, and I think there's a lot of wisdom to be gained from thinking like Mark Twain sometimes, and the impression goes something like this.

Many educated people are familiar with the report from the Institute of Medicine that brought focus to issues with the quality of care in the United States and the name of that report, of course, is "To Err is Human." A few people, however, recall the subtitle of the report, which is to really screw things up, you need a computer. (Laughter.)

So the question that we have in front of us today, that hopefully we're going to have a really good discussion about, is to what degree do we need to be concerned about and therefore do we need to think about applying regulation to clinical decision support?

Okay. So my background before I worked at AHRQ is that I'm a family physician, and those of us who are doctors of a certain age, when I say to you, do you remember your little black book? You go, oh, yeah. Because most of us toted around a little book, right, and in this little book we kept notes about the things that we had seen, interesting pearls of wisdom that had dripped from the mouths of our attendings or that we had gained through, you know, hard work and insight with some of our cases, stuff like that.

In residency where I trained, we had something that was called the compendium. Now, the longer title is the compendium of hard knocks, and it was a compilation that had accumulated over the years of all the stuff that we knew we were going to need, whether as interns or as, you know, upper level attendings, for the things that would come across our plates, and that was a really, really useful thing, and the reason it was a useful thing is because we are all human, and therefore, you know, even though we're highly trained and highly capable people, we have our limitations, whether they're limitations of time -- okay, you can't possibly read all the different articles that get, you know, posted to PubMed every year and therefore have a comprehensive view of the medical literature -- or whether they're limitations of energy, you know. It's 3:00 a.m. and you're trying to, you know, make a critical decision about what med to start on somebody who is in the emergency room or what have you.

We have recognized for, you know, a long time that we can't do it by ourselves, okay.

So, you know, we used to use these things called books back in my day, and it was funny. As Kristen was talking about the definitions of things that the FDA should regulate, I started thinking about the book that was sitting in my lap, and I'm like, well, isn't that a contrivance? You know, isn't that something that I use to kind of have things bundled together and have with me and readily at hand so that when I can't quite remember, I can flip it open real quick? Yes. You get the idea.

We start to say, well, how do we define these things, and that's what the discussion is to be about, because now we've taken that leap, right.

There's a physical art to medicine, okay, and we use devices for that, whether it's, you know, the piece of catgut that's attached to a little arc of surgical steel that I drive through a patient's tissues to bring them together and hold them together or, you know, something really complicated like the modern pacemaker.

In the same way, we could potentially think about decision support software as devices that help us extend the mental part of medicine and start to bring the right information to bear at the right time in the right way.

So I mentioned at the outset that I'm from AHRQ. For those of you who don't know, AHRQ, it's part of the family of Health and Human Services agencies, and we're fondly known colloquially as the evidence agency because we fund health services research about how things like health IT can improve the quality of care.

So I think I can say fairly unequivocally, okay, that a review of the literature and the evidence shows that clinical decision support can improve quality, okay. There have been various meta-analyses over time, and there's one pending right now. It's not quite been published because there are some publications waiting at some journals, but in the very near future, it's going to show that if you look across the literature, the use of clinical decision support software as we've classically defined it, okay, improves at least process outcomes, okay, which is, if you have a certain condition and then you're supposed to get a certain test, does that improve the rate at which you get that test, and the answer is yes in those cases. The review of the literature also shows that when you use it properly, you're more likely to be successful.

So there are not just questions of the capability of these software devices to improve quality; there are also questions of how you use them.

So then if you also look at the literature, it's equally obvious that there are plenty of ways in which, you know, you don't use this stuff correctly, and therefore you don't get the desired impact, okay.

Now a key point of the debate, okay, for me, is are we causing harm by using those devices not in the way that they're intended or not by the people that are intended, or are we simply less effective? Okay. One is a much more rigorous standard, and one which does not, you know, brook inaction, okay, which is when we're doing harm. It's another thing to say that it's an imperfect system now. We don't do a great job, and it's not as effective as it could be, but it kind of improves a little bit, okay, and that's a different set of standards, which I hope we'll have a chance to discuss a little more in depth as we go by.

So the final thing that I'm going to leave you with -- no, I'm sorry. Two more things.

So we look at the evidence on health IT. It's clear that clinical decision support can improve quality, okay, and that should color how we think about regulation of this because you don't necessarily want to dampen that effect in a system that has issues with quality and they've been pointed out, you know, in a lot of different places. You don't want to stifle the ability of the system to improve and innovate.

Something that I did not hear a lot of in Kristen's discussion and, you know, there's plenty of opportunity to discuss it, is this question of the learned intermediary, okay, and the idea being that where on this spectrum, and she correctly characterized it as a spectrum, where on the spectrum of decision support aids do we fall with an algorithm, okay, making a decision for us, for Dr. White, you know, something that I can't possibly calculate in my head and how do I trust that that output is correct, okay, for example, radiation dosing versus something for which it saves me time, okay. It saves me having to run over to the library and look something up for 45 minutes while the patient's sitting in the room waiting for me and come back and say, ah, I knew I remembered reading that article last year, and here's what it says, and so here's what we're going to do.

There's a spectrum of those things. So where in that spectrum do we start thinking that it's important to be able to make those decisions?

Who is it important to put that burden of regulation of use on? Okay. Is it important that I, as the licensed practitioner, being held to the standards of either my profession or my state or what have you, satisfy that burden, and I'm making judgments about when to use it or when not to use it or when I'm out of my depth, versus when is it important to put that burden on the folks who are putting together the software, where historically it's not been? The burden's been more on the intermediary.

The final thing I want to point out to you, after the intermediary, is this question of, you know, what does the evidence show? Kristen again did a very nice job of laying out, you know, what appears to be, you know, data processing and algorithms, queries out on the other end. When you get down into the evidence, okay, and again being from AHRQ, I know this reasonably well, when you get down into the medical evidence, it is not infrequent that you find non-hard edge statements, right, and again, those of us who are clinicians know this. When you go to the guideline of your professional association and you say I'm looking for a recommendation to either do A or B, frequently you run into words like should consider, you know, might think about, right, and maybe not think about, but should consider is a term that frequently gets used. And then you say, what do I do with that? That's not a binary yes or no decision. Do I consider it, and then ultimately I've got to make a judgment about whether or not to do that or not.

So the point of that being that when you get down into the medical evidence, there are certain things for which there are hard facts, okay, and you should, yes, do this or, no, you should never do that. There's a lot of gray, okay, and a lot of the decision support software that's out there tries to help us interpret that gray, right. And so, you know, then it becomes a real question of, well, where do we start worrying about how the software is presenting things to us or not?

I hope that we can have a good discussion about not just the clinical content of these things and who is using them, but how usable they are, okay. The world has rapidly changed from the days of Windows 3.1. We've got a lot of very cool secondary interfaces, but by the same token, you know, if we don't think carefully about not just who's using these things but where they're using them, when they're using them, the form that they're being presented, we're doing a disservice to everybody who receives healthcare in the country. So I'm hoping that we can have a good discussion about that. So lots of good things teed up for us.

I'm going to close with a quick comment from one of my favorite books. Eta Berner, a professor at the University of Alabama and a person with whom I've worked a lot, wrote a nice book on the subject, and preparing for our discussion today, I was rereading some of the stuff, and this struck me. So I thought it would be worth our consideration.

"The debate over medical software regulation represents one of the most important controversies of the computer age. The balancing of risks and benefits as well as public safety and technological progress means that scientists, clinicians, and policy makers have one of civilization's most interesting and challenging tasks."

I like that. It felt like it suited the scope of the moment. So thank you very much for your time, and I look forward to a discussion of one of civilization's most interesting and challenging tasks.

(Applause.)

DR. PESTOTNIK: Well, thank you, Bakul, and the Agency for inviting me here today. My name is Stan Pestotnik. I'm the founder of TheraDoc, which is a manufacturer and vendor of standalone clinical decision support technologies. Prior to TheraDoc, I spent 23 years of my career at LDS Hospital in Salt Lake City, much of the work funded by AHRQ or its predecessor agencies on the development of clinical decision support technologies, and we were fortunate that we published almost 75 or over 75 papers on the benefits of those systems as well as some of the challenges.

What I want to do today is I want to just briefly introduce you to the technology and then wrap up my comments with the challenges that we face as a vendor in installing and implementing these technologies in diverse hospitals, clinics, and long-term care facilities.

When I founded TheraDoc in 1999, I wanted to create a decision support platform that was interoperable at three distinct levels, (1) at the data level, (2) at the knowledge level, and (3) at the workflow level. The idea here is that as I've observed over my 30 years in informatics and healthcare, it seems the repeating theme that we're faced with is that there's a paradigm of isolation and fragmentation, whether that isolation and fragmentation be due to data or the knowledge, and it seems that if we think about technology such as clinical decision support, that we can begin to break down those barriers.

The platform is very, what I would say, manufacturer or vendor agnostic with respect to the sources of where the data comes from. So we're not really troubled by whether these are Cerner systems, Eclipsys, or EPIC, but the platform is very data hungry.

For technical feasibility, we define this as a minimum data feed of the ADT, the patient registration, laboratory with microbiology and pharmacy data. However, in practice, we receive many more data feeds from these source systems than that. In addition to those, we're receiving surgery interface data, vital signs data, clinical documentation, and radiology.

In addition to the technology, we think a lot about the knowledge bases that are encoded in our software. So we spend a lot of time thinking about and using a structured process for what we call knowledge engineering. The idea or the goal here is to be able to reflect in the knowledge bases of the domains that are addressed the current literature, the best practices, and the evidence-based guidelines that are available to us.

We also subscribe to the idea that the knowledge bases should be very transparent to the users at the time of use as well as at the time of implementation and that all references should be traceable and auditable within the software.

The knowledge bases undergo a rigorous regression testing before release and have complete audit trails.

The idea here is to be able to provide a technology that will provide realtime clinical surveillance that's coupled with clinical decision support. The idea here is to be able to focus clinical attention vis-à-vis reminders and alerts and then provide guidance with regards to interventions, documentation, and then wrap that up with the ability to track outcomes.

The current domains that are addressed by the TheraDoc technology are in the areas of infection prevention and surveillance, antimicrobial stewardship, and the management and detection of adverse drug events.

What I want to do now is just briefly show you a couple of the screens from the software to illustrate what happens with the software. This happens to be a screen of our antimicrobial stewardship application that is actually recommending to a physician or another clinician the treatment of a hospital acquired pneumonia, and it's recommending a drug dosing for this particular patient's underlying physiology, giving some information about how long one should think about using it and, of course, in today's environment, how much is that going to cost on a daily basis.

You'll note that the references are all there. They're transparent to the individual. They are hypertext links. So in the context here, if I wanted to, I could click on that, and that would go out and read that particular article about this subject matter, and it's also important to note that this is what we call open-loop clinical decision support, in that it is the ultimate choice of the practitioner whether to order this therapy or not.

You'll also note that there are 13 alternative therapies that the literature suggests that would be appropriate to manage this particular infectious process. It just happens that the one the software has recommended is the most cost-effective for this particular clinical case.
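
As an illustration of that open-loop, cost-ranked style of recommendation, the following Python sketch ranks guideline-appropriate alternatives by daily cost and leaves the final choice to the clinician; the therapy names, appropriateness flags, and costs are invented for the example and are not TheraDoc data.

```python
# Sketch of cost ranking among appropriate alternatives: among therapies the
# knowledge base deems appropriate for the case, surface the least expensive
# first, leaving the ordering decision to the clinician (open-loop CDS).
# Therapy names, appropriateness flags, and daily costs are invented examples.

therapies = [
    {"name": "Agent A", "appropriate": True,  "daily_cost": 42.00},
    {"name": "Agent B", "appropriate": True,  "daily_cost": 18.50},
    {"name": "Agent C", "appropriate": False, "daily_cost": 9.75},  # not indicated here
]

def rank_alternatives(options):
    """Return guideline-appropriate options, cheapest first."""
    eligible = [t for t in options if t["appropriate"]]
    return sorted(eligible, key=lambda t: t["daily_cost"])

for t in rank_alternatives(therapies):
    print(f'{t["name"]}: ${t["daily_cost"]:.2f}/day')
```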

Recommendations are not always drug-related. In this case, the most appropriate course of action would be to remove the Foley catheter before one thinks about administering drugs. You'll note the patient's summary data; that's important because in every one of our recommendations, we're not only tagging the literature that goes along with it, but also the supporting clinical data at that time, and that ends up in an audit trail.

And sometimes the appropriate recommendation is to bring in a specialist into the care rather than recommend a drug or something else. We call this software gracefully degrading when the complexity of the clinical condition is such.

You can also use the software to help you look at and manage different adverse drug events. This happens to be an alert for a patient who may have drug-induced thrombocytopenia. What we're displaying here for the clinician reviewer is the actual platelet count at this time. There's a lot of trending that goes on behind this, and we show the current medications that the patient is receiving or has received during the encounter that the literature suggests may be related to drug-induced thrombocytopenia, and it's up to the clinical user to then take this information and work up the case. We provide documentation and guidance for causality and reporting to MedWatch and a variety of other reporting structures.

But the idea here is that the software is able to do the screening of a lot of this data and then focus my attention on this patient, searching for needles in haystacks if you will.
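
A minimal Python sketch of that needle-in-a-haystack screening follows; the platelet threshold, the short drug list, and the record layout are illustrative assumptions, not TheraDoc's actual knowledge base.

```python
# Sketch of the screening logic described above: trend the platelet counts and
# cross-check current medications against a small knowledge base of drugs the
# literature associates with drug-induced thrombocytopenia (DIT).
# Threshold, drug list, and record layout are illustrative assumptions.

DIT_ASSOCIATED_DRUGS = {"heparin", "vancomycin", "linezolid"}  # example entries
PLATELET_ALERT_THRESHOLD = 100  # x 10^3/uL, illustrative cutoff

def screen_for_dit(platelet_counts, current_meds):
    """Return an alert dict if counts are falling below threshold on a suspect drug."""
    if len(platelet_counts) < 2:
        return None
    prior, latest = platelet_counts[-2], platelet_counts[-1]
    falling = latest < prior
    suspects = sorted(set(m.lower() for m in current_meds) & DIT_ASSOCIATED_DRUGS)
    if falling and latest < PLATELET_ALERT_THRESHOLD and suspects:
        return {"alert": "possible drug-induced thrombocytopenia",
                "platelets": latest, "suspect_drugs": suspects}
    return None

print(screen_for_dit([210, 150, 85], ["Heparin", "Lisinopril"]))
```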

With respect to infection control and infection prevention, just a sampling of what can be done. Here the software's identifying a patient who has an infectious process for which isolation is required, and conversely, when the infectious process clears, one can send out an alert or a reminder to discontinue isolation. Bates et al. have shown that people who remain in isolation are twice as likely to have an adverse event due to that isolation. So it's important to get them out when appropriate.

Okay. With that little bit of a background, I want to wrap and talk about the implementation challenges. I told you a little bit about the structured knowledge engineering process that we utilize. I talked about the data. What's interesting to us is that these challenges are not unique to any one hospital market segment. We find this across the spectrum of our customers, whether these be large academic medical centers or very rural hospitals.

The first implementation challenge is at the knowledge level. I described how we've gone through this rigorous knowledge engineering process, running regression tests against the knowledge so that at release, we're very confident that the software's going to do from a knowledge perspective what it's intended to do, and then we get to the implementation and the installation phase, and people want to customize knowledge based on local practices.

Now, what I will tell you is that often, and now almost 12 years in doing this in a commercial venture, that is driven more by business processes and business rules than clinical processes.

So what it requires us as a vendor to do is, if we do customize the knowledge -- and sometimes we refuse to customize the knowledge -- we'll have to run regression testing against that new knowledge base, and it just prolongs implementation, and you end up getting beat up about that.

More important and more often encountered, and what we feel to be more onerous for us and can create a safety problem is this whole idea around data, and there are three things that we are constantly faced with when we are installing these technologies in hospital environments or clinics or long term care, and the first one is filtering.

What we find is, because there are limited resources in IT departments, they will choose to filter data that they're sending to us. It tends to be quite acute with respect to laboratory data and microbiology data. We, as I said, want as much of the data as is available to go out through an interface, but again, because the hospitals are resource constrained, they do not want to validate and test the data strings. So they will arbitrarily filter the data.

Fortunately, we have a very good implementation team with automated tools that can alert us to the filtering, and then it gets into a very interesting conversation with the client as to why we should or should not filter, and, in fact, we've actually had to solicit the input of CAP, the College of American Pathologists, to help us convince the client that filtering is not a good idea nor a safe idea.
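
One way such an automated filtering check can work is sketched below in Python: compare the test codes actually arriving on the interface against the codes the source system is expected to send. The expected-code list and message format are assumptions made for illustration, not TheraDoc's implementation.

```python
# Sketch of an automated filtering check: compare the set of test codes that
# actually arrive on the lab interface against the codes the source system is
# expected to send, and flag anything missing.
# The expected-code list and feed format are illustrative assumptions.

EXPECTED_LAB_CODES = {"CBC", "BMP", "BCX", "UCX", "PLT"}  # example test codes

def detect_filtering(received_messages):
    """Return the expected test codes that never appeared in the feed."""
    seen = {msg["test_code"] for msg in received_messages}
    return sorted(EXPECTED_LAB_CODES - seen)

feed = [{"test_code": "CBC"}, {"test_code": "BMP"}, {"test_code": "PLT"}]
missing = detect_filtering(feed)
if missing:
    print("Possible upstream filtering; missing feeds:", missing)
```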

The other is suppression of results, again driven by business processes rather than clinical evidence. The idea here is that with microbiology data, we often see hospitals suppressing susceptibility results because they want to control the prescribing of expensive agents, or they want to preserve a particular anti-infective and, hopefully in their minds, reduce the impact of selective pressures on antimicrobial resistance.

And, finally, it's the fidelity of the source data that's very interesting even in the dawn of the 21st century. We're still seeing hospitals who will take data that is very discrete, coming out of the source systems, and turn it into non-discrete data so as to format it for a nicely viewable, printed report.

So this creates a challenge for a vendor such as ours when we are, as I said, very interested in getting as much of the data in a very pristine fashion, spending a lot of time performing semantic interoperability with the data through our vocabulary engines, making sure that it's mapped to all of the standardized codes so that it can execute appropriately within the knowledge bases.

I guess in summary what I would say is that my two cents on the subject is as the Agency looks to provide guidance on the devices themselves, I would encourage also looking at and providing guidance on the implementation and the consequences of implementation. I know Dean Sittig and David Classen have written a lot about the safety of implementation. I'm here to tell you that it's real, and it happens out there in the real world.

So with that, I thank you.

(Applause.)

DR. KATZ: Thank you, Bakul. I'm Dr. Richard Katz. I'm a clinician. I'm the Chief of Cardiology down at GW and do not come from the informatics world. And so it's been an interesting sort of transition for me as I was required to use the electronic medical record a few years ago, otherwise our bonuses were turned off, and it continues to be that way as we try to maintain meaningful use. So I am now at 99 percent electronic prescriptions, et cetera.

And so where I come from, in the couple of years that I've been sort of trying to take advantage of this process, is more from the patient interaction and for the clinician interaction with some of these clinical decision supports, less so from sort of the other end of the spectrum and the very valuable one for diagnostic and decision-making on the higher level of the practitioner.

And so, for example, I've done some work with medication adherence with a reminder software, sort of your more simple alarm clock kind of thing. Sometimes one way. More complex with some chronic disease management particularly with diabetes, with the WellDoc system as well as another PHR type of system where there's a lot more suggestion and interactivity, in addition to that the EHR kind of experience that we've had.

As a clinician, in thinking about, you know, what the risk, the benefits, and where we need regulation and what I'm using already without necessarily there being an electronic or mobile health sort of thing, I'm trying to think of sort of the key elements that we have to consider such as who's going to be using the software or how many people and how standalone is it really. Is it just the patient? Is it the case manager who may not be sophisticated or varying levels of sophistication or a healthcare provider, or are they all combined together? And so it may or may not be interfaced into an EMR, and so your data sources may be somewhat standalone. That system, though, may have multiple constituencies that are cross-talking with each other.

And so the Pill Phone reminder was just the patient. The WellDoc management system that I happen to have had some experience with is more of a community where you've got the patient, the case manager, and the healthcare provider all sort of using the system and depending on it for enhancement of care.

And then what's the purpose and the intended use of that software, and how complex is it going to be? And you've heard -- so I'm trying not to be too redundant with things here -- is it just a tracker? Is it going to track status? Obviously not only your blood pressure, your heart rate, your weights, your blood sugar, but it's also going to track the status of your care. Are you achieving the standards of care, let's say the HEDIS criteria for what you need to achieve that year for diabetes? And is that being brought to the attention of both the patient and the care provider system to really keep track of things, as sort of standard-of-care reminders? And certainly we may get graded on that as physicians in the healthcare system, as we are reimbursed or paid for performance rather than just ordering things in a more random way.

So in that purpose, also there's advice and there's always advice based on data, and you just heard about what's the sort of liability input and how is it going to be used? Are we going to use it just educationally? And so it may not be a real decision maker, but it may be trying to sort of increase your awareness as to what decisions might be made.

Is it prescriptive or is it non-prescriptive? I think you may have talked about that to some great extent, but obviously when it is prescriptive, if we're making calculations for doses of medications, for example, then there obviously is going to be some significant regulatory oversight, and it needs to be really quality controlled.

When it becomes non-prescriptive, so it's suggesting rather than directing the patient or their caregivers to take actions -- whether it's alerts, whether it's, you know, placing you in a risk category, high, low, or medium risk, whether it's a coaching thing for behavior -- the non-prescriptive stuff, which is sort of like the material I give out in the office, or what people get online, as they constantly say, well, on the Internet I saw this, is not going to be nearly in that same risk category.

But there's also the validity and quality of things. You've heard how rigorous the previous system is in trying to show whether what they have is evidence-based and in referencing it. So with my experience with adherence, just with the so-called Pill Phone, they used a pill book, which was a book, and they just incorporated it into the data, and hopefully that is as accurate as possible and gets revised from time to time. Whereas in a diabetes management system, there is a lot of complexity of standards of care, but that is based on what would be considered guidelines and practice approved by the consensus of the diabetes community of professionals. But when it gets prescriptive, then it's obviously more complicated.

As a cardiologist, I can relate to this: I'm often trying to figure out what to do when I send a patient home with heart failure. I'm going to prescribe for them or give them advice, and I scribble this down, or ask my house staff to scribble it down if they have decent handwriting, as to how much diuretic to take based on their weight, but that's not in isolation. There's a human factor where they're connected to a nurse or the physician to advise them, to say, hey, how are you doing, and to check back and forth. It is prescriptive in a way, but it's sort of, these are my instructions as you go home, based on a physician-verified recommendation, and it's not standalone in the sense that it's just set out there and you plug in. In this case, it's going to be verified and approved by the healthcare provider, the heart failure nurse practitioner, or the cardiologist.

And then the third part is, you know, what environment the patient or the physician is going to be using it in. Is it going to be used in the home or when they're out and about, and how standalone is it, and how interconnected is it potentially with the rest of the medical system? And that's sort of -- again, how standalone it is determines what level of risk there is and what kind of verification or feedback is going to be looking into this care system rather than it working in isolation. So I don't think a lot of this stuff should necessarily be standalone without some really very careful verification of it.

The Pill Phone, for example, is pretty much standalone. The patients set up their meds, perhaps with some guidance, and then they've got their alarm clock going back and forth, back and forth: take your medications. The diabetes manager is observed, and there is feedback to a case manager who's monitoring on a regular basis what's happening with these patients.

So to just sort of summarize a little bit, you've got risks and you've got benefits, and that's where we want to look at the needs and opportunities, where we want to go and where we don't want to go.

Obviously, we talked about how there could be some software errors where, along with education, you're telling a person something misleading about their risk level or medications, and I think the systems have to be audited to check how many errors there may or may not be, whether software errors or otherwise.

And there has to be some preliminary testing with patients, with case scenarios, to establish that the system actually has some accuracy, that there aren't going to be gaps, and that it can be revisited.

In these systems where you're making measurements, let's say putting in your blood pressure, you need to have some rechecks, because you can't live and die by just one number -- you certainly don't want to rely on just one number. So if you tell a patient to take a blood pressure, and tell them to take two measures, or two blood sugars, and the two of them just don't match -- you've got 120/80 and 180/100 two to five minutes apart -- the system's got to be designed to say, are you sure about that? Was it a data entry error? A measurement error? Please recheck that, because something is off there. So there has to be some internal validation and some looking for what may be outliers in your system.
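
A minimal Python sketch of that kind of internal validation follows; the tolerances are illustrative assumptions, not clinical recommendations.

```python
# Sketch of the recheck prompt described above: if two readings entered a few
# minutes apart disagree by more than a tolerance, ask the user to re-measure
# or re-enter before the value is used. Tolerances here are illustrative only.

SYSTOLIC_TOLERANCE = 30   # mmHg, assumed
DIASTOLIC_TOLERANCE = 15  # mmHg, assumed

def needs_recheck(reading1, reading2):
    """Each reading is (systolic, diastolic); return True if they disagree too much."""
    sys_gap = abs(reading1[0] - reading2[0])
    dia_gap = abs(reading1[1] - reading2[1])
    return sys_gap > SYSTOLIC_TOLERANCE or dia_gap > DIASTOLIC_TOLERANCE

if needs_recheck((120, 80), (180, 100)):
    print("Are you sure? Please recheck the measurement or the data entry.")
```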

I think another potential risk is whether the information is evidence-based and standard of care, how it is referenced and where it came from, and whether the recipient of that information is alerted to whether or not it is and what the basis for these recommendations is. And there have to be assurances of proper communications, if it's going to be communicating with healthcare providers and case managers, that those communications are good.

There are plenty of benefits. I think there are a lot of them. With better patient information we're really taking our care to the next level, because we're organizing it better and managing it better; we're taking this technology and allowing physicians and healthcare providers to practice better, as long as there are appropriate lines of responsibility defined.

We like this realtime feedback. It really is adding more data, so we can look at trends and have more accurate data, rather than my patient who says, oh yes, I did measure my blood pressure, but I left that list at home -- just as with my list of medications.

A lot of these things in mHealth, and the stuff I've started with, are more back to basics of just how we should be taking care of patients and all these little gaps. We have so little time to spend with patients that we are filling in those gaps on standard of care rather than going into new recommendations. And those where we're not changing care, but we're organizing it and emphasizing good care, are probably ones which have reasonably high impact but low risk, and thus we really shouldn't have to spend a lot of time on regulation there. Obviously, when you get into prescriptive stuff, that becomes a much higher risk, and we have to consider that.

With the electronic and mobile health kind of situation that we're running into, we really need to define what the categories of our information are. Is it no different than what I'm giving to a patient in the office, where they're going to fill out a card and keep their blood sugars or their blood pressures or their weights, and there's a little information sheet which I pull off the wall, which I've got from the American Heart Association, to give them or not? Or is it then becoming that much more prescriptive? And that's where the challenge is as well. But hopefully some of these little things can help us be more rational in how much regulation we really need over time. Thank you.

(Applause.)

MS. BLOOMROSEN: Good morning. My name is Meryl Bloomrosen, and I'm the Vice President for Public Policy and Government Relations with AMIA, the American Medical Informatics Association, and I was thinking I was going last, and I was going to be clean up batter. Is that the acronym there?

Anyway, for those of you who do not know AMIA, we are a professional association. We're down the road a bit or to the west in Bethesda, Maryland, and we are the professional home for biomedical and health informatics, and we're dedicated to the development and application of informatics in support of patient care, public health, teaching, research, administration, and related policy. We have about 4,000 multidisciplinary members who are seeking to advance the use of health information, technology, and informatics, and I am very pleased to have the opportunity to be here today and to represent AMIA and to offer our thoughts on what we believe is a very complex, critically important set of topics.

My presentation this morning is confined to the questions posed to us by the FDA and relates specifically to clinical decision support, although we recognize that the topics being discussed over the two-day period, and certainly the invitation by the FDA for the public to comment on the draft guidance, are broader; but today we're very happy to offer the expertise of our members and subject matter experts in terms of some of the ideas and topics and thoughts we have about clinical decision support.

After the presentation or subsequent to today's meeting, I'll forward to the FDA a written copy of our formal presentation as well as a list of references and resources that we've compiled that we hope you'll find helpful in informing the ongoing discussions.

I'd like to just briefly take a moment to acknowledge and thank several AMIA members and leaders for contributing to this presentation. They include David Bates, Robert Greenes, Rainu Kaushal, Gil Kuperman, Nancy Lorenzi, Blackford Middleton, Jerry Osheroff, Dean Sittig, and Ted Shortliffe, many of whom several of you probably recognize as names of people who have been very involved in building the evidence base and research surrounding clinical decision support in particular.

You've asked us to respond to a very specific set of questions which I will do, but in addition, I'd like to focus a little bit of my time this morning on key themes and some cautionary remarks.

We believe that defining clinical decision support is absolutely essential to moving the discussions forward. Earlier there was a suggested definition of clinical decision support presented by the FDA, and we believe that additional attention needs to be paid to the evolving nature of clinical decision support, particularly as there's a growing array of mobile health devices, technology, and software applications that are being used to offer up or deliver clinical decision support.

There are lots of approaches that exist. There are lots of definitions and terms surrounding clinical decision support, and also surrounding other terms that you've asked about and used in your guidance: standalone systems, devices, mobile apps. We actually think, or offer up for your consideration, that this variation in definition and use of the terminology might reflect a lack of agreement in the industry, across different stakeholders. It might reflect alternative interpretations, and it could also reflect the evolution of the very topics and their scope, which is changing, and we would hope that the FDA will seek broad public and private sector organizational input on the definitional issues related to clinical decision support.

In the written comments, we offer several versions of definitions, but in general, going back to an AMIA released roadmap to clinical decision support that was released in 2006, in part based on the funding and support of the AHRQ and the Office of the National Coordinator, we said at that time that clinical decision support "encompasses a variety of approaches to provide clinicians, staff, patients, and other individuals with timely, relevant information that can improve decision-making, prevent errors, and enhance health and healthcare."

And obviously not to be forgotten is that there are a number of tools and interventions that are included in this clinical decision support bucket, and we think further delineation of what's in the bucket that's currently broadly labeled clinical decision support is very essential. Some of that has already been mentioned by the previous speakers. They could range from computerized alerts and reminders, automation of clinical guidelines, order sets, patient data reports, dashboards, diagnostic support, therapeutic advice, et cetera.

As a result, and given FDA's interest in potentially regulating and overseeing clinical decision support, we would offer for your consideration that it is essential to distinguish between "generic decision support," as Jon alluded to (information resources such as articles in MEDLINE and books, or antidote details in a poison control system), and patient-specific or proactive decision support, which would be specific to a patient or to populations of patients. Patient-specific clinical decision support may be simple, and here we're using the word simple a little bit differently than it was used earlier, so even the contrast between simple and complex needs further clarification.

Patient-specific clinical decision support can be simple, such as a single rule that fires when a specific set of lab criteria occurs for a specific patient, or more complex, such as tools that assist with cancer chemotherapy or radiation therapy, or that assist with general medical diagnoses.
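As a minimal sketch of the single-rule, patient-specific case just described (the field names, threshold, and advisory text below are hypothetical placeholders, not clinical guidance), such a rule can be little more than a conditional check on one lab value:

```python
# Hypothetical single-rule, patient-specific CDS check.
# Field names, the threshold, and the advisory text are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabResult:
    patient_id: str
    test: str          # e.g., "serum_potassium"
    value: float
    units: str

def potassium_alert(result: LabResult) -> Optional[str]:
    """Fire a simple advisory when one specific lab criterion is met."""
    if result.test == "serum_potassium" and result.value > 6.0:
        return (f"Patient {result.patient_id}: potassium {result.value} {result.units} "
                "exceeds the illustrative threshold; review active orders.")
    return None  # no advisory fired

# The advisory would be shown to a clinician, not acted on automatically.
print(potassium_alert(LabResult("p-001", "serum_potassium", 6.4, "mmol/L")))
```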

Another cautionary remark. We believe there is an ongoing need to coordinate and harmonize existing efforts across the Federal Government and between and within the research and practice communities in the public and private sectors on clinical decision support and mHealth.

There is an evidence base in clinical decision support that suggests that detailed clinical patient data are often necessary for CDS to be most effective. Some of the previous speakers have mentioned this.

Again, we'd like to emphasize something else that's been previously mentioned actually. The knowledge bases upon which the clinical decision support applications are formed must be based upon best clinical practices. They must be kept current. They must be organized and used in a way that provides coherent and sensible clinical decision support. Explanations of the sources for the clinical decision support advisories or guidance and ideally linkages to the primary source data or evidence should be available to any end user in every case.
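One hypothetical way to make that transparency concrete in software is to carry the provenance with every advisory; the record below is an illustrative sketch, not any particular vendor's schema:

```python
# Illustrative advisory record carrying the provenance the speaker calls for:
# the knowledge-base version, citations to primary evidence, and a review date.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CdsAdvisory:
    message: str
    knowledge_base_version: str                        # e.g., "2011.09-r3" (made-up scheme)
    sources: List[str] = field(default_factory=list)   # links/citations to primary evidence
    last_reviewed: str = ""                            # ISO date the rule content was last reviewed

advisory = CdsAdvisory(
    message="Consider dose adjustment for renal impairment (illustrative text).",
    knowledge_base_version="2011.09-r3",
    sources=["https://example.org/guideline/123"],      # placeholder reference
    last_reviewed="2011-08-30",
)
```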

So we believe that the FDA should consider that, not just for standalone mobile medical applications but in general, there needs to be a generalizable framework to help ensure the safety and effectiveness of HIT systems and applications, including clinical decision support.

We caution the FDA against focusing too narrowly on clinical decision support or considering mobile apps "in isolation from other clinical decision support delivery methods or contexts." We think that clinical decision support is likely an element in all clinical systems, whether implemented on mobile platforms or on "tethered workstations."

As has been previously noted, there are, I think, some thematic overlaps here. The safe and effective use of clinical decision support, we believe, is partially dependent on the quality of the associated organization and the HIT environment in which it's implemented, regardless of whether the clinical decision support is running on a mobile device, on a mainframe, or in the cloud.

There is a need for development and dissemination of best practices for HIT design and implementation, usability, and all the other factors that have an effect on the quality of how the CDS is implemented. We believe that efforts are still needed to synthesize the results of existing and future research that relate to the capture, compilation, and dissemination of best practices and guidelines for designing and implementing these systems.

With specific reference to clinical decision support, we think there is a need for methods to identify best practices on a national level with a public/private sector initiative.

An additional comment. We note that there is a rapid emergence and convergence of technologies, devices, and applications. There is also a continuum of new and evolving forms of patient care delivery and payment methods, such as medical homes and accountable care organizations. We note the increasing pursuit of personalized medicine and the growing pressures for consumer engagement in healthcare decisions, their own healthcare decisions, our own healthcare decisions, so we anticipate that there's going to be a further blurring of the lines of distinction between information delivery channels and mechanisms, mobile and standalone versus other devices and applications, and between those that are intended primarily for use by clinicians and other providers and those that are intended for use by patients, consumers, and their caregivers.

We also think the FDA needs to look closely at things such as the CMS EHR incentive programs that are providing financial incentives for the "meaningful use of certified EHR technology." Those meaningful use objectives include CDS-related meaningful use criteria, and those should not be left out of the discussions.

We acknowledge that there is great interest broadly in what kind of regulatory or oversight interventions might be warranted in order to help assure the safe implementation of clinical decision support applications in healthcare, and again we caution that this should not necessarily focus only on applications delivered via mobile devices; other means, mechanisms, and HIT technologies need to be considered as well.

With respect to the specific questions that you posed, and in light of my time constraints, you did ask for a definition of standalone CDS software. We suggest that there are several main groupings of CDS software, although we don't think they're specific to standalone CDS.

We also believe there are multiple ways to categorize the purposes of CDS software, again not specific to standalone CDS software. For example, there's CDS targeted at providers that supplies decision support for individual patient care decisions. There could be CDS targeted at providers and supplying decision support for populations of patients. There is also CDS targeted at patients and consumers and supplying information to help the individual in lay terms.

We suggest that standalone as a term in and of itself might be elusive and poorly defined because to some extent the devices themselves obtain information from a wide variety of sources, including sensors, user input, access to remote EHR data and other sources that are not necessarily operating in a standalone mode. So we believe there is a spectrum of standaloneness.

If standalone is meant to refer to operation without human intervention, and I think several of the other speakers have mentioned this, such as in a closed loop, like a pacemaker or an embedded insulin pump, then this term would encompass a very limited number of devices, most of which we don't think would currently function on generic mobile devices.

With respect to your question about level of support and what levels of support do these CDS software provide, we would ask that this be further clarified. We're not entirely sure what level of support refers to. It could mean the depth and breadth of coverage and context in a patient-specific way, or it could mean the depth and breadth of coverage within a given clinical domain.

For example, using a computerized patient order entry application as an example, there could be a range from basic to advanced, and these may or may not be accessible on mobile devices. There could be basic coverage of classes or subsets of drugs; intermediate coverage of a broader set of agents with more complete modeling of drug knowledge; and advanced capabilities such as a deep drug knowledge base with modeling of drug classes and other attributes.

On the other hand, perhaps you mean, by level of support, features, functions, and purposes of clinical decision support, and we think here, too, these might be better described along a continuum of attributes. Some researchers have recommended a multistep model that provides a framework for clinical decision support related standards. Others have identified specific CDS intervention types. Still others depict CDS across different continuums, answering questions in response to queries, simply retrieving data, providing information to make diagnoses, et cetera.

So, to summarize, what we believe strongly is that this is a complex topic. There's been a lot of research and evidence specific to clinical decision support, although not specific to mobile applications, and we are very happy to contribute the knowledge of our members and the subject matter experts who have worked on this for many, many years. We would hope that the FDA adapts the information and the evidence base related to clinical decision support that has been developed in the context of health information technology systems and brings that to bear on conversations related to mobile apps. Thank you.

(Applause.)

DR. DIERKS: Thank you, Bakul. Hopefully I'll use this correctly.

So, you know, it's interesting. I have the enviable position of being last, and what I think I need to do is go a little bit beyond what the original two or framing questions were, which as I understood it were to, based on our experience, and I happen to also -- I'm both an informaticist as well as a clinician, to kind of define based on our experience what the landscape of clinical decision support is and help inform you as you kind of develop a strategy, and I thought, well, gosh, let me see what I can do to kind of help us also think forward because I think if we actually develop a strategy based on what we currently use and what's out there today and how we understand decision support, we will be overly constraining because within a very short time, I think we're going to see just an extraordinary explosion in terms of the diversity and nature of what decision support can and will do.

And so I'm going to try to do it, instead of from a top-down, descriptive approach of what is here and what are the attributes, from more of a bottom-up approach, and hopefully I'll be effective.

I've failed already with the technology.

Okay. So I'm going to say that clinical decision support, and again I'm going to bring my context as a clinician in this presentation. Clinical decision support is anything, and I mean anything that has the potential to influence any or all of the typical decisions that I make when I care for a patient.

So I think a good starting point is what are the types of decisions or the cognitive tasks that I, as a clinician, typically make? And then, what strategies do I use to move through those?

So I'm just going to step you through about six or seven of the specific things that I do, day in and day out, when I take care of patients.

The first thing that I do is I actually try and detect what the patient's current state is. What's the patient's current state? And that's usually based on a limited set of data.

The second important task that I do day in and day out is try and make a determination about whether there's been a change in the patient's state. And if I try to detect a change, I want to know by how much, when did it change, and how quickly did I detect the change?

The next important task that I do or decision that I make is trying to predict. Given the current state, what do I believe will be the patient's future state or states? And what will they be based on the natural course of events? What will they be after I apply specific intervention, or what will they be after I withdraw an intervention?

The next important decision that I make is trying to identify what the goals or objectives are for my care of the patient. So given that I know what the patient's current state is, where do I want them to be, and if I don't want them to be in a certain place, what do I need to do to actually achieve that goal or maintain the current state?

The next decision is I need to identify what my options are. What are the options that are available to me? So given where the patient is, and where I want them to be in a certain period of time, what are the possible interventions or responses to achieve that goal?

My next task is to choose among several options. So if there's more than one possible intervention to try to achieve that goal, how do I rank order them and by what criteria do I make that choice? It's never a simple, single criterion. It often is a balancing or tradeoff between multiple criteria, and it really is a very difficult task to try to optimize amongst one or many of these criteria, but that's one of the things I'm doing day in and day out as I make my decisions.

The next important decision or cognitive task that I'm involved in is -- and this isn't something that explicitly happens. It's far less likely to happen with young and inexperienced clinicians. It probably happens more naturally and spontaneously with more experienced ones, but I have to consciously make a decision. Do I actually have enough information to make a sound decision? And that's, I think, an often overlooked task, and again, it's an important aspect of day in and day out clinical decision-making, and hopefully this is one of the things that you'll be thinking about when you think about where decision support will be going in the future.

The next important task is if I don't, if I determine that I do not have enough information to make a sound decision, I need to search for additional information, and that involves deciding what other information do I need, and where should I look for it.

The next cognitive task is sort of a check-in, and again I think this is something that happens much more spontaneously and naturally with a seasoned clinician, less so with the less experienced one, but it's a step prior to actually doing an intervention, where I step back and say, have I actually made a fair decision? And what I mean by that is, I have to consciously ask myself as I've gone through selecting the information, interpreting the data, making my decision, making my choices, choosing my options, how might my thinking have been biased? And this is one of the most important things because not only is there that need to do that conscious step, but it's often difficult to even know that because just by its very nature, a bias is something that you really aren't necessarily aware of.

The next task or decision, cognitive task or decision is structuring my approach or my intervention, and this is just simply how do I actually order the steps to do the intervention?

Now, I talked about all of those decisions that I make as a clinician and the cognitive tasks. I'm just going to give kind of a limited list of some of the strategies, not all of them, but these are some of the strategies that I actually apply to make those decisions and go through those cognitive tasks.

I may try to make a match to prior examples that I have been presented with. I may want to compare it to some sort of a reference standard, like a guideline or what an expert panel has declared are sort of good rules of thumb. I may need to seek additional information, and I may want to consult with others. I might want to create a list and rank order it. I might want to filter the list. I might want to expand the list. But those are some of the strategies that I'm going to apply as I go through those decisions.

So I've taken you through sort of what I do day in and day out as a clinician, how I make decisions and what are the cognitive thought processes that are involved.

That actually sort of gives you what is the upper and lower bound of what clinical decision support in principle could do, and until we think about every one of those decisions and every one of those cognitive tasks, we are probably taking too narrow a view of what clinical decision support is.

So I'm going back to my first question: what is clinical decision support? And it's anything, anything that has the potential to influence any or all of those typical decisions I just took you through. So it could be any tool that might influence my ability, or the quality with which I can detect a current state, detect a change in state, predict future states, identify what the goals and objectives are, identify how to rank order the options, or optimize along one or many criteria.

How do I identify my optimal choices? It could be any tool that helps me assess the quality, the adequacy of the information, whether I need more information to make a better decision. It could be a tool that helps me search for very detailed information, not in general, but to fill in the gaps, and so there needs to be that ability to determine what are the gaps and where to find just the precise information to fill it in. It could be a tool that helps me check for and presents me with alerts as to how my current thinking is biased and corrects for that. And then, finally, it might just be a tool that structures the approach of the intervention, gives me, do this step first, this step second, this step third, this step fourth.

So I've hopefully kind of broadened the view, rather than again sort of describing what's out there now, instead given you a more expansive view for where probably decision support will be in the future because I've gone back to these first principles of how I actually make decisions and what the cognitive processes are.

So I'm just going to sort of conclude with a couple, and this is a very limited list, because I think the Panel hopefully will bring out many more of the thoughts about risks that are driven by these principles.

But I think, you know, I want to start the framework for kind of assessing the risk by thinking about, you know, a failure of decision support. So if there is a failure, is the failure evident to the clinician? And what other cues or strategies might be available to the clinician in the absence of or in the failure of the clinical decision support?

I completely agree with you, Jon, that a lot of the evidence out there about decision support that shapes the way in which we deliver care and helps clinicians kind of move more towards a standard is really overwhelmingly good.

I think that the paradox with decision support is that, you know, if I could rely on my own capabilities, if I thought that my own decision-making and cognitive strategies were rock solid, I actually am unlikely to use decision support. It's only when I already have those vulnerabilities or I lack the cues or the strategies internally that I'm going to rely on that decision support, and that's a really important thing to consider. So as decision support becomes far more powerful, far more complex, it's going to develop to address the existing vulnerabilities or deficiencies in my own learned intermediary capabilities. And if instead I'm perfectly capable of making those decisions, or choosing those strategies, I'm unlikely to use it, so it's low risk just by virtue of the fact that it's going to not really be used very effectively or very widely or broadly.

So the last two elements are the traditional elements of risk, which is given that it fails, what's the probability that a patient is harmed? And when I think about failure, I think the thing I'm much more concerned about is not just an overt failure where I'm aware that it's not there, that that information isn't there, but when it fails in a way that's plausible. In other words, it uses a reference model using data or patient population or patient type that's not sufficiently similar to the one on whom I'm actually trying to make my decision. That's a failure that's not evident.

And then not only what is the probability the patient will be harmed, but what's the severity of the harm? And herein lies another difficult issue around assessing the risk, which is that a single type of decision support may pose a very low severity of harm to a healthy individual, but when used in a neonate who has far fewer margins for adaptation or used in someone at the extreme of life or with severe end organ failure, that severity of harm can vary widely, and so again that's sort of the patient context of use. It's very difficult to kind of make a general statement about what the severity of harm will be when you have a broad range of patients on whom that will be used.

So I'm going to stop here. I hope that the Panel discussion can use this as sort of a stepping-off point and that we'll be able to talk about these risks, but think expansively, not just about decision support as it is today in 2011, but where it will be in maybe 2013, and certainly far into the future, 2020, so that whatever strategy the FDA chooses is robust enough or adaptable enough to deal with that future state. Thank you.

(Applause.)

MR. PATEL: Thank you. That was excellent. I think it shows the complexity of the topic at hand, and one of the comments I sort of realize I didn't make in the morning was about why talk about clinical decision support as a context with mobile apps?

I think it's an important thing that Meryl pointed out, and I want to just be clear, it is not all about mobile apps. It's not all about, you know, running it on a desktop computer. It is about clinical decision support in general. So we have not answered the question. When it came up as part of mobile apps, and as we have noted in the guidance, we could not address it in the mobile apps context, which is so narrow, so we expanded it, saying we need to solve this. We recognize that we need to solve this in some way and recognize the different nuances that go along with it, and hence the question was raised in the NOA to say let's just talk about this, figure out a rational approach and a way we can assure patients at the end of the day. So that's really where I wanted to leave it.

Having said that, I'm not sure -- are there any tweets on the presentations at all? I'm not sure it makes sense to have a Q&A session for the presenters at this time, but we could take a break, come back at 10:15, and we'll have a Panel discussion, discussing the factors and all the perspectives you just heard. Thank you.

(Off the record.)

(On the record.)

MR. PATEL: I want to start with the last Panel, the most interesting Panel at least from my perspective. I've been thinking about this for a long time, not as long as some of the folks on the Panel here, but I think the Panel is going to have a great discussion and hopefully provide insight to what steps we should take next.

We made a slight change. We invited Kristen to be on the Panel as well, really for her to get the discussion going, especially in light of the presentation she made and the thoughts that were triggered.

So at this point, I'm going to have Meghan start the Panel discussions at almost 10:25. We'll go for an hour on the Panel, and I was told to remind you guys, lunch is -- the kiosk outside won't be available for lunch. If you need snacks, go grab them now. It's important. Otherwise, you'll fall asleep.

Sara and Dr. Leo are here and will pass out cards in the audience. If you have questions for the Panel, please feel free to jot them down during the discussion, and we can have it passed onto Meghan who can use that as part of the discussion.

At the end of the Panel, after an hour, 11:25 or so, we'll open up for like 15 to 20 minutes of Q&A from the audience to come in and step up to the microphone and ask questions, and same thing will be on the Twitter. Feedback for people on the webcast can do the same thing. Thanks again.

DR. DIERKS: Okay. I have just a couple of sort of practical issues just to tell the Panel. When you speak, if you could introduce yourself. As the Panel discussion goes on, still just introduce who you are as a speaker, and that will help the transcriptionist. Press the button and always make sure it's green so we can hear you.

And then I'm just going to -- several of us on the Panel had the opportunity to actually present. There are three though, three individuals who have not had that opportunity. So I'm going to actually distract the audience a little going back and forth and allow those who didn't have the opportunity to give an earlier presentation to start.

So what we'll do is we'll start with Satish, and then we'll move to Wendy and then to Mickey, and then come back to Jon, Kristen, Meryl, Stan, and Richard, and I'd like you to just again just introduce yourself, and with the exception of the three who might give a slightly longer introductory statement, the rest of us, just very brief, very succinct, trying to answer the two framing questions for this Panel, which is what factors should FDA consider in determining the risk classification of different types of software that provide clinical decision support? And what is an appropriate approach for assessing reasonable safety and effectiveness of these types of software for each of these factors? And then we'll sort of go from there.

And, Bakul, I'm going to make a request. If we have lots of questions coming in at one point, if you will just kind of give me a little bit of a signal, then we might be able to do the questions a little bit earlier.

Okay. So one last thing is, I didn't really introduce myself this morning. I just sort of started with the slides, but I'm Meghan Dierks, and I'm formerly a practicing trauma surgeon. I now do largely informatics work, in both research and operational roles at Beth Israel Deaconess Medical Center, and I'll be moderating the Panel.

So we're going to start with Satish, and again, introduce yourself and then give us sort of your framing answers.

DR. MISRA: Hi, everybody. My name's Satish Misra. My role here is as a partner in a group called iMedical Apps, which was kind of born out of the idea that mobile medical apps were useful for clinicians, and there was no real good way for clinicians to know what was useful in the disorganized Apps Store. Now, what we do is put together a group of clinicians who review medical apps, devices, and try to put a filter on what's going on in the mobile healthcare space and interpret that for the healthcare providers who aren't familiar with the terminology and the business lingo.

In my daytime job, I'm a second-year resident at Johns Hopkins. So I apologize if I'm a little slow on the draw. I was on call last night.

One thing that I think we've been really happy to see is the outreach to physicians and healthcare providers in shaping this entire document and getting feedback on it. I think one thing that we certainly acknowledge is that physicians have not been at the table in a lot of healthcare legislation and policy, and it's, you know, exciting to see that we are given that opportunity now. So we certainly appreciate that.

For the questions posed to the Panel specifically, there are two important considerations that we would pose. The first one is that we're dealing with an extremely diverse app ecosystem; medical apps really run the gamut from apps where small errors could potentially cause fatal outcomes, to apps where major errors are meaningless.

And I think the best way to understand that would be kind of anecdotally. So taking a medical calculator that helps you use risk stratification scores for renal failure, those tend to be very small parts of the overall clinical picture, and the clinician has to put it all together, and while that certainly would fall into the category of decision support, where you're taking data, inputting it into the app, and it provides you some analysis, the analysis it provides is a small part to a much bigger picture, and it's left to the clinician to interpret that.

So if the app gives one outcome, one result, that's not going to define what I do. It's a small part of my overall decision.

On the flip side is an app that you input blood glucose measurements into, and it gives you a calculation for a sliding scale dose of insulin or a long acting form of insulin where a mistake could be fatal. And I think the challenge posed here is that trying to apply the same regulatory framework to such a diverse group of apps will certainly be a challenge.
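To make the contrast concrete, here is a sketch of the low-stakes end of that spectrum, a risk-score calculator whose output is only one input to the clinician's judgment; the weights and cutoffs are invented for illustration and are not a validated score, and the high-stakes dosing case is deliberately left as a comment rather than implemented:

```python
# Hypothetical risk-score calculator of the "small part of the picture" kind.
# The weights and thresholds below are invented for illustration, not a validated score.
def illustrative_renal_risk_score(age: int, on_nephrotoxic_drug: bool,
                                  baseline_creatinine: float) -> int:
    score = 0
    score += 2 if age >= 65 else 0
    score += 3 if on_nephrotoxic_drug else 0
    score += 2 if baseline_creatinine > 1.5 else 0
    return score  # the clinician interprets this alongside the rest of the clinical picture

# By contrast, an app that converted a glucose reading directly into an insulin dose
# would have its output acted on with little mediation; a comparable sketch is
# intentionally omitted here because an error in it could be harmful.
print(illustrative_renal_risk_score(age=72, on_nephrotoxic_drug=True, baseline_creatinine=1.8))
```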

The second consideration is, as we develop regulations for these apps, where the regulation ends and where kind of the caveat emptor starts, and where it falls to the clinicians and the healthcare associations to provide best practices, to be the judges of these apps, to say we've vetted them, they follow appropriate practices, we think they're good, as opposed to the FDA or other regulatory agencies saying that they are useful.

So one thing that I would pose is that healthcare associations, professional associations may have a role here in helping define what apps are useful in taking some of that burden off this regulatory framework.

DR. DIERKS: Okay. Great. Wendy, can you introduce yourself and then give some introductory comments?

DR. NILSEN: Hi. My name is Wendy Nilsen. I'm with the Office of Behavioral and Social Sciences Research at the National Institutes of Health.

My perspective coming here is from a research perspective, and we recently at NIH, in combination with Robert Wood Johnson and the National Science Foundation, ran a meeting called the mHealth Evidence Meeting. And one of the reasons we ran this was to think about how do we generate evidence for our mobile technologies? How do we think about that? What methods are out there? And I think the last speaker really brought it home.

It's a very, very diverse space, and we need different levels of standards, you know, different levels of evidence depending on what we're doing. When we're using well-validated algorithms, that's one level of use you're going to have, one level of evidence you're going to need to generate, but if you're thinking about some of the complex clinical decision support that we've seen here today, that's a completely different level of evidence.

And so one of the things, you know, as we think about this, is that obviously the FDA guidance has been written to frame it in this way, to think about the level of appropriateness, recognizing that there are a range of methods we can use, including just making sure the algorithms are transparent and out there, especially, as the last speaker said, when the risk is minimal because they're one small part. If you think about BMI, if an error were there, it would be much less severe than some of these other things, especially some of these very complex algorithms or automatic processes that are being developed, which are probably not mobile at this point but probably will be soon.
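The BMI case is a useful anchor because the formula is standard (weight in kilograms divided by height in metres squared) and a small input error moves the output only slightly, as this quick sketch shows:

```python
# Standard BMI formula; the example inputs are arbitrary.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

exact = bmi(80.0, 1.75)          # about 26.1
slightly_off = bmi(80.5, 1.75)   # a 0.5 kg input error shifts the result by only about 0.16
print(round(exact, 1), round(slightly_off, 1))
```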

The other thing that I wanted to say was, you know, because when you're thinking about this, it's the clinical effectiveness here. So thinking about how do we test this, and I think it came up this morning, and it came up yesterday, how do we continue to evaluate these things? It really seems to call for a postmarket surveillance on this. You know, we looked, this morning you were talking about how you had feedback loops, but how do we continue to think about this and evaluate the effectiveness of these clinical supports over time? And that seems to be, I think making a rating is one thing, but I think for more complex clinical decision supports, really the data over time is going to do it, and somebody had talked about yesterday how the work with CMS and FDA on stents had really helped, and I think that this shows some of the partners, and I would include NIH in that just because I am from NIH.

But I think it's really important to think about how do you continue to evaluate these, because as you've all noted, these are very complex decisions. When we're talking about complex decision support, and I don't want to lump them all together, you don't need BMI to be continuously evaluated, but you do need some of these very complex decisions, and thinking about continuously evaluating, I think, is critical.

DR. DIERKS: Great. Thank you. And, Mickey, can you introduce yourself and give some framing discussion?

DR. SEGAL: Thank you. My name is Michael Segal, and I'm going to focus very much on the question we've been asked: what would be good scoring criteria for deciding which applications should get high scrutiny and which types regulators should stand at more of a distance from?

My background is as a neurologist and neuroscientist, and I'm the founder and chief scientist of SimulConsult which does decision support software to help doctors do diagnosis, focused around the area of 2,700 diseases, mostly in neurology and genetics, and this is being pretty widely used at the moment in 95 different countries.

I think chief among the criteria for regulation should be the degree to which people rely on the software as an autopilot, as opposed to an advice system, and I think that the research in the field and our experience going back to the beginning of this has been very much that doctors look at decision support for diagnosis as an advice system and not as something that they will just blindly follow the advice.

Now, the thing that both we and the literature have focused on is that doctors care very much about the degree to which the tool can explain what it's suggesting and explain the reasons for it, and we have a number of important screens in which we go through that information, and people tell us that that's what they most value. Research going back three decades by Ted Shortliffe and others has found very much the same thing and has also valued very much the ability of such systems to keep up to date, and we've actually adopted very much an open-database, evidence-based architecture that's highly quantitative. It's really been the ultimate in terms of opening up information and allowing people to focus on what might need changing, what might need updating.

I should point out that the ability of something to be autopilot versus advice depends on who you are very much, and we've restricted our software to medical professionals in part because of uncertainties of how lay people would be able to deal with decisions. Is this ataxia? Is this dystonia? Is this myoclonus and things like that? And these are just hard things for lay people to do, and lay people will not have access to any of the testing procedures. So we focus very much on this type of decision support as being something for doctors, and I'm sure that any sensible regulatory approach that comes out of this will make a very clear distinction between doctors and consumers in terms of who's using something.

The second key criterion would be the workability of regulation. One of the things that surprises people about what we're doing is that we come out with updates to our database several times a week typically, and so any type of Class III premarket approval for our database would be completely unworkable. And while we think it's very important to do studies, and, in fact, the National Library of Medicine is funding a study to look at the efficacy and cost effectiveness of our software, I would caution that even the study design for that study actually involves making evidence-based changes to the database. So not only are things changing every few days, but the process of evaluating actually makes changes. So medicine, as we know, keeps changing; everything's changing, everything gets ripped out, and every decade or two it's completely different, and we should keep in mind that it's very much a moving target, more of a moving target than most people would imagine.

I would imagine that what people would consider for something like this is more something along the lines of Class I regulation, and in some respects we would be seen as the sort of poster example of that, because we have an incredibly evidence-based approach where a user can focus on a particular data point of a finding in a disease and automatically, through capabilities we've had for years in the software, submit information that, if it gets through editors, would get into the general version of the database. So in some ways, we'd be the perfect example of showing why Class I regulation sort of gives you the best of peer review and Wikipedia and computational abilities and so forth.

But let me add two cautions. When people were talking about the cost of such certification yesterday and throwing around numbers of 40,000 or 10 times that much, when we were starting this back close to 30 years ago, that would have been a deal breaker. I was a resident. If there had been enhanced regulation of that sort, we wouldn't have started this, and one should be careful not just in terms of preserving the good things that we see today, but in terms of encouraging the future types of innovation.

And that's not just from the sort of garage/living room type of startup. An example that many of you heard in the news yesterday is Watson, IBM's technology, which was validated in the realm of playing Jeopardy! but is being looked into as to whether it can help in medical decisions. The ability to do an evidence-based questioning of why a diagnosis did not work out well is something we can do fabulously and do as part of continuous quality improvement, but if you look at something like Watson, it would be hard to do that, and we don't know how effective Watson is going to be. But I would be careful not just about spooking the small companies; I would be careful about spooking the Watson-like innovations, because after all, things like Google are helpful in terms of looking up medical information, and I would be very careful about putting something like Watson into a classification where they feel it's just not worth the trouble for regulatory purposes.

The last of the three criteria I'd like to suggest is the alternatives to regulation. There would certainly be some information value in Class I certification of our software, and of course, when the National Library of Medicine study comes out, that will be taken very much more seriously by clinicians, but in some ways, I think what's taken most seriously by clinicians is things that other people have alluded to, which is community validations. People mentioned professional societies; our software's used as part of case-based education by the Child Neurology Society.

One of the top National Library of Medicine resources, called GeneReviews, is something that we not only use but have fed all their content into our information about diagnosis, and we also refer to all of their articles so people can sort of see whether this diagnosis makes sense. But they also link back to us, implementing something they were asked for by their users: can you have something that helps us if maybe we're wrong in this diagnosis, something that takes a bigger-picture view, looks at the findings in this disease, and then suggests a full differential diagnosis, not just a canned one based on the disease, but based on the findings in my individual patient?

And these are the types of validations that are taken very seriously by people in the field, and they've involved a huge amount of work for us in terms of working with GeneReviews, but we consider this core to our mission, and if we had to incur substantial costs going through Class I certification for the FDA, I don't think that would be of much value to our users, and it would certainly distract from our other efforts which I think are more key to providing good medical assistance.

So my recommendation would be for these three criteria: when all those three lights are flashing, those should be the situations where we really go after immediate regulation, and when you're talking about things like regulating devices for delivering radiation and radiotherapy, all three lights would be flashing on that, and one should go after that with vigorous regulation. When the lights are not flashing, one should be much more careful, and one should avoid situations where the FDA could be seen as curbing innovations.

So one wants to avoid the thalidomide type of risk, but one wants to avoid choking off an area that many people see as one of the keys to giving us medical care that is both cost effective and accurate, and if we choke that off through ill-considered regulation, we would be doing a lot of damage.

DR. DIERKS: Okay. Thanks. Stan, introduce yourself, and if you could build on what you gave this morning as a presentation, just very succinctly, a couple of comments about the framework.

DR. PESTOTNIK: Hi. My name is Stan Pestotnik, and as I think about the risk factors associated with clinical decision support, several things come to mind.

One I think we talked about this morning, the idea of a human intermediary. So I think that indeed is a risk factor whether they are present or whether they're not, and that gets us to the idea of whether it's open loop or closed loop decision support.

I think another risk factor that should be considered is the intended user, whether it's a clinician versus a non-clinician.

And then I think a framework around the different what I will call levels of clinical decision support, I think they have different levels of risk. Alerting and reminder systems are going to have a different risk category than those decision support applications that assist with therapeutics versus those that assist with diagnostic issues.

I think the transparency of the knowledge base is an important thing to consider, and I would reiterate my comments about the implementation of these systems, and I concur that a postmarketing surveillance type of system, similar to a MedWatch version for reporting risk and harm, would go a long way in helping us intelligently assess what the risks are. I'll leave it up to the Agency whether that's voluntary or mandatory reporting.

DR. DIERKS: Great. And, Richard, and I'm going to ask, Richard, also if you can maybe make a quick comment about whether you think decision support that's developed by an individual clinician for use in their own practice, never to be sold, never to be packaged, never to be redistributed, should be treated differently.

DR. KATZ: This is Richard Katz from GW. I agree with Stan here about these risk factors, and we sort of outlined some of them depending on the type of interaction, how standalone this is, and the clinical situation, whether it's patient-based, clinician-based, or some mixture thereof. Whether it's prescriptive or non-prescriptive is something we obviously hit on already.

I also wanted to sort of address, too, and have us think of, what is effective? What is our impact? That's just to sort of summarize that, and as I put together what I'll call clinical trials, and I'm going to get to that in a minute, you know, what are the endpoints here? Well, obviously is the patient hospitalized? Does the patient keep going back to the emergency room? Can we save money by having them stop going to the emergency room? Did we encourage more office visits or did we actually reduce more office visits because of patients better being taken care of? Medication adherence, have we improved that or not? Medication errors, have we reduced them or not?

An endpoint would be just have you enhanced a patient's self-management skills, their confidence? That in itself is going to probably translate to better clinical outcomes. Are we meeting standards of care? And so there's that checklist of things that would be at least minimal standards of care for quality of care, and then finally some cost impact. These are sort of what are our goals.

How we assess this and regulate this is going to be important. When we go to sort of traditional medicine decisions of whether a drug gets approved, we're looking a lot at randomized clinical trials, and when Wendy put together this meeting with Robert Wood Johnson this last month, that's what we struggled with. What we're talking about is this moving target. This is a technology added onto clinical care that's really evolving, and it's really tough to set up a randomized clinical trial and stick to that trial without changing the protocol midway, because there are all kinds of things that are changing as we adapt through this, and it's going to take you a year or two to find something out, and meanwhile you'll have changed it midway. So there has to be a little difference, rather than just relying on classical, randomized clinical trials to judge the impact, while also monitoring for safety as well.

When it comes to the individual, the sort of self-developed, homegrown system, again I think it goes back to what Stan has talked about here: what is the reliability and validity of this? What is the accuracy and what is the source of the information on which it's based? And that is going to be a hard thing to do for a single individual. The harder thing, and what we're really talking about, is selecting which is a good one, whether it is reliable, and where the community is going to get its AMA or whatever as approval so it can sort of be on the preferred list of apps.

DR. DIERKS: Okay. Great. Meryl.

MS. BLOOMROSEN: Thank you. Meryl Bloomrosen from AMIA, American Medical Informatics Association. Two questions, short answers I hope.

In terms of the factors that FDA should consider in determining risk classification, we believe one of the most critical issues is the one that has been mentioned several times already, and that's whether the clinical decision support is mediated by a human being or not, thinking that the most rigorous attention should go to applications which, in an automatic and autonomous fashion, provide clinical decision support and intervene directly in the patient's care.

This is in part based on the capacity for the intervention to do harm, but we think there are some other issues. Again to reinforce what has already been said, the nature of the clinical guidance, its propensity to cause harm, how the information is presented, and other factors. We would note work done by AMIA and others, other Federal Government agencies, as well on usability issues, of how data are presented and how these apps work. Also there's an increasing body of knowledge and evidence about the potential of unintended consequences of health information technology, and we think that literature and that research needs to be brought to bear on FDA's considerations.

In terms of some kind of risk prioritization, here are some of our ideas. We think the following kinds of options could be considered. Clinical decision support that interacts directly with the patient or the consumer might, in fact, present a moderate amount of risk. Then there are forms of clinical decision support that provide guidance to the provider but leave it up to the provider, and I think there again, clinician, provider, that term itself needs further clarification. Is it just the physician? Is it the nurse? Is it the physician extender, et cetera? But it's up to the provider, and it leaves it up to the provider, to accept or reject the guidance, or, as some other folks on the Panel have put it, the advice. That might be something to consider. That seems to us to present a lesser risk to the patient, because a qualified professional is acting as that learned intermediary between the clinical decision support advice and the patient. And then there might be other forms of clinical decision support that do not make patient-specific recommendations and are always intermediated by a provider.

So there could be some other ways of thinking about this. For example, autonomous clinical decision support that affects patient care directly, again without provider oversight. There could be patient-directed clinical decision support with and without provider oversight. And then there could be human-mediated clinical decision support that's not intended for the provider; it's intended for the consumer or the caregiver. And there could be human-mediated clinical decision support that's not patient specific; it's more general.
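Read one way, those remarks suggest a rough tiering; the sketch below is one possible reading expressed in code, with hypothetical tier names, not an FDA classification:

```python
# Hypothetical risk prioritization reflecting the categories described above:
# autonomous CDS, then patient-directed CDS without provider oversight,
# then provider-mediated or non-patient-specific CDS.
from enum import Enum

class CdsRiskTier(Enum):
    HIGHER = "autonomous, acts on patient care directly"
    MODERATE = "patient/consumer-directed, no provider oversight"
    LOWER = "provider-mediated or not patient-specific"

def classify(autonomous: bool, patient_directed: bool, provider_mediated: bool) -> CdsRiskTier:
    if autonomous:
        return CdsRiskTier.HIGHER
    if patient_directed and not provider_mediated:
        return CdsRiskTier.MODERATE
    return CdsRiskTier.LOWER

# Example: a consumer-facing app with no provider in the loop.
print(classify(autonomous=False, patient_directed=True, provider_mediated=False))
```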

In terms of appropriate approaches for assessing reasonable safety and effectiveness, sort of our mantra at this point is that we believe a coordinated effort across the Federal Government but also between public and private sectors is very much needed, that any stakeholders brought to the table should build on existing models and approaches, leveraging research and evidence and ongoing work in fields such as informatics, quality assurance, and patient safety.

There's a lot that's been undertaken in the "classical" or "traditional" HIT sector, and we think that should be very much considered, and we look forward to being a part of those conversations. Thanks.

DR. DIERKS: Kristen, do you want to make some framing?

DR. MEIER: Kristen Meier. As I indicated, I'm a mathematical statistician at the FDA. I've been a reviewer here since 1995, working originally with in vitro diagnostics and then shifting especially in the last five years to work with diagnostics in the area of neurology, cardiology, and ophthalmics.

In my presentation, I talked about some of the factors that FDA's considering. Just to quickly restate, we talked about the level of impact it has on the subject, the degree of acceptance in the clinical practice, and the ability to easily identify erroneous output through just several different kinds of sources.

I didn't cover in the talk the second question about how you might look at safety and effectiveness. I just want to say that, in general, when we look at diagnostics, there's a whole range of what you could look at: anywhere from a statement of conformance to certain standards, to software verification and validation, to actually getting clinical data on how reproducible the output is, to internal validation of algorithms, and on to full-blown clinical studies, not just clinical trials, with diagnostics.

Again, we look at what we call diagnostic performance, and I used that word "diagnostic" in a very broad way. I know that a lot of people think that means you're making a diagnosis. We use that term very broadly at FDA to be all kinds of devices that assess patients, and again I encourage you to actually look at that guidance document that we just put out. That would be the high end of the kinds of ways we might evaluate, but again, there's a full range in there, and I'd be interested in comments from the Panel about what, you know, range is needed, what level might be needed for different kinds of risks. Thank you.
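Diagnostic performance in this broad sense typically reduces to measures such as sensitivity and specificity computed against a reference standard; a minimal sketch, with invented counts:

```python
# Sensitivity and specificity from a 2x2 table of device output vs. a reference standard.
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true positives the device detects
        "specificity": tn / (tn + fp),   # fraction of true negatives it correctly rules out
    }

# Invented counts for illustration only.
print(diagnostic_performance(tp=90, fp=15, fn=10, tn=185))
```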

DR. WHITE: Thanks. Hey. Jon White from the Agency for Healthcare Research and Quality.

I'm probably just going to offer more nuance to the points that have been made. I think that most of them are well stated.

What factors should you consider? You know, you're on pretty solid ground if you start talking about never events, right. There's certain levels of ketamine that you really shouldn't be giving to anybody unless they're a Bengal tiger. Therefore, that's the kind of thing upon which you could, you know, firmly focus and say, you know, decision support that guides Dr. White, you know, in his sleepy state post-call to, you know, give X amount of ketamine to somebody is probably not a great idea. So that's something that feels firmer that you could sit on. So a high level of rigorous evidence to go along with that.

The presence of the learned intermediary has been mentioned several times. The nuance that I want to add to that is that, well, we talk about, you know, Dr. White's judgment, but increasingly you're seeing forms of decision support that either introduce capabilities Dr. White has no capacity to do himself, such as processing, you know, 100,000 different medical images to aid you in the diagnosis of a certain thing, right, or that a panel of Dr. White's learned peers has vetted. For example, within Partners, we fund a demonstration where the clinical community of Partners coalesces around certain kinds of rules. Should that bear a different level of responsibility for adherence than, you know, just Dr. White's judgment alone? So that sort of nuance, when you start getting into more intermediaries, and, as mentioned before, the business practices of hospitals or ACOs or patients in medical homes, is going to start factoring into, you know, how you consider the implementation of these things.

Now, I also want to, you know, I'm lapping a little over into approach here, so let me just say, that I'm moving to approach. So what's an appropriate approach? In a word, caution. Okay. What I've heard from everybody here has been fairly consistent, that this is a lot more complicated than just saying, okay, one, two, three and four, five six. Okay. It's nuance. You're getting into some of the science of medicine but a lot of the art of medicine. So being very cautious in the approach to regulation here, I think, is absolutely warranted.

How much of this is something that should be regulated at the outset or certified in a product as mentioned earlier versus accredited by, you know, looking at a hospital and their practices, you know, by certain organizations that shall go unnamed. There's a line there. It's a little fuzzy but, you know, that's something there.

Along the lines of the professional organizations that have been mentioned, you know, I have never gone wrong when I have adhered to nothing about us without us, which is a great approach to dealing with doctors, okay. So, you know, kind of linking arms with professional organizations and moving ahead and getting them on board with the approach is going to be key.

I do come from AHRQ. So I will offer you a couple of specific avenues for some of the things that we've talked about. Postmarket surveillance has been mentioned. AHRQ has been working closely with the FDA on common formats for patient safety reporting. There are patient safety organizations established in statute that are avenues for anonymous reporting of these things and aggregatable reporting of these things.

Additionally, I know you know that we've talked about prospective systems for evaluating implementations, like the work we've been doing with Geisinger, and I know you know about that. So I won't belabor that issue, but I think there are specific things that we can work with you on there, as well as other federal agencies. ONC is not here, and I don't want to represent them, but they're clearly folks that you want to kind of pull into this orbit.

The Centers for Education and Research on Therapeutics, the CERTs, that has largely been something that CDER, instead of CDRH, has worked with, but that is a long-established collaboration between AHRQ and the FDA on drugs, but for the past four years and for the next four years at least, we've specifically funded a health IT CERT, and they look at a lot of these issues. So that would be another great venue for gathering both of evidence as well as expert opinion for us to work together on.

The last thing I'll just mention is that when you talk about an appropriate approach, you know, something that is kind of mush right now but needs to be considered is, you know, the whole usability aspect of this. It gets a lot of talk. I haven't, you know, there's some good stuff out there, but a lot of people are doing a lot of talking. I would say that it's something that we should absolutely consider but we need to learn a lot more about it before it moves ahead. So thanks.

DR. DIERKS: Great. I'm going to kind of ask the Panel to kind of turn a little bit, and let's, you know, one of the reasons that FDA strives to get the risk classification on conventional devices right is that the class then drives what kind of measures you need to take or what the FDA believes you need to take to maintain the safety of these products for their entire life cycle while they're out there in the field.

So I'm going to ask the Panel for a moment to let's say, for the sake of discussion, that we arrived at consensus on the right way of risk stratifying. Let me ask about, you know, how we would propose managing some of the things that the FDA then tries to strive for in controlling the risks, things like labeling for example, labeling which, to those of you in the audience who aren't as familiar with the conventional approach to labeling, is everything from the user manual to the actual description next to the boxes where you put the data in to, you know, sort of some level of transparency that gives the user a sense for how the logic is flowing, for example, so labeling being one of them.

We talked a little bit about the postmarket surveillance, but I think that's a very challenging issue.

And then, you know, I'm going to ask, I think I'm going to ask Stan specifically to address this, which is the maintenance, corrective actions, and corrections. How do you actually maintain this over time, particularly if you develop a clinical decision support tool that's so highly customized that, in fact, every implementation represents a different product?

So I'm going to sort of throw that out, and I'm going to let people sort of respond to those particular issues. Let me see if there was one other -- well, I think those are kind of important things because those are the controls that we put in place to manage the risk. So, again, not having settled on the answer of the risk stratification, how would you propose good or bad use of those other controls for decision support?

Anyone want to jump in?

DR. PESTOTNIK: Well, I'll take the issue about the maintenance of the knowledge bases.

I think it's incumbent upon us to have different versioning of the software, the knowledge bases, so we have a very structured process for versioning the knowledge bases to know how the knowledge may differ at one institution versus another. We have built it within the software so that when someone chooses to follow a recommendation or conversely chooses not to follow a recommendation, they log that into the software, and that is then transmitted through our VPN access to our different sites back to Salt Lake, where we then keep a record and a log of that. It goes into our customer tracking system. We have our own developed severity scoring type of a system. So if something is graded as a severe error that could cause patient harm, of course, we're going to push out a patch to everyone in a very short period of time. We notify them that that risk occurs. Fortunately it doesn't happen very often, and we then have to prioritize.
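
[A minimal sketch, in Python, of the kind of severity-graded versioning and issue-tracking process described above. All class names, fields, and thresholds are hypothetical illustrations and do not describe TheraDoc's actual system.]

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class Severity(Enum):
        """Hypothetical severity grades for reported knowledge-base issues."""
        MINOR = 1       # low impact; batch into a scheduled release
        MODERATE = 2    # incorrect but unlikely to change clinical action
        SEVERE = 3      # could contribute to patient harm; push a patch promptly

    @dataclass
    class RecommendationEvent:
        """One clinician response to a recommendation, logged at the site and
        transmitted back to the vendor for aggregation."""
        site_id: str
        kb_version: str        # knowledge-base version that produced the advice
        rule_id: str
        followed: bool         # True if the recommendation was accepted
        override_reason: str = ""
        timestamp: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class IssueReport:
        """A problem reported against a specific knowledge-base version."""
        site_id: str
        kb_version: str
        rule_id: str
        severity: Severity
        description: str

    def patch_priority(issue: IssueReport) -> str:
        """Map a severity grade to a hypothetical response policy."""
        if issue.severity is Severity.SEVERE:
            return "notify all sites and push a patch immediately"
        if issue.severity is Severity.MODERATE:
            return "schedule a fix for the next maintenance release"
        return "log and prioritize with other minor issues"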

And, you know, as we've got more and more customers, now approaching 400 hospitals and integrated delivery networks, it becomes a major task, and it's again incumbent upon us to put these processes in place. We are building a quality management system with a full CAPA and all of that to handle this, but it does come with a price.

DR. DIERKS: So let me ask a really important question, which is if you had a subset of customers who chose not to take the patches because, in fact, you know, they sometimes involve downtime, they produce some system instability for a period of time, would you then -- at what point would you consider deinstalling if that customer chose not to sort of maintain an updated system or periodically didn't accept a patch, something like that?

DR. PESTOTNIK: Luckily we've not had that occurrence to date, but we think a lot about that, and I don't know if I have a good answer today. I mean, I think it would be an obligation of ours that if that did happen, we would have to have a serious conversation with the executive sponsor and decide, you know, whether we turn the application off or not.

DR. WHITE: So I've been talking a lot as both an informatician and as a Government guy. Let me put on another hat, a different hat, the doctor hat. So this is not necessarily, you know, I'm not well versed in the ways of regulation and all the different tools at your disposal.

That said, it would be real useful to me as I'm trying to decide which of the various products that I'm trying to use, to have some sort of a standardized way to know what evidence underlies the recommendations that I'm getting, okay, and this gets to the user issue, right, to the sole issue of the learned intermediary and how do we train the users to know what they're using and, you know, when they're in their depth and out of their depth.

You know, often now I have to look at a given product and take it on the name of the company or at face value or kind of a generic statement about, you know, we do use evidence-based stuff. And if I don't want to dive any deeper and find out what that means, I'm good, but if I do want to dive deeper and find out what that means, it's sometimes challenging. So working with the industry and the users, in this case, the clinicians, to come up with some sort of an agreed upon standardized way to represent what that means, that would be useful. That would be a great first step and, you know, it's not something that you would have to have everybody use, but it would sure make it a lot easier for me to make those decisions if it were there.
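
[One way to picture the standardized representation being asked for here: a small, machine-readable evidence summary attached to each recommendation. This Python schema is purely illustrative; no such standard is named in the discussion, and every field is an assumption.]

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class EvidenceSummary:
        """Hypothetical, standardized description of the evidence behind one
        CDS recommendation, meant to be shown to or queried by the clinician."""
        guideline_source: str   # issuing professional society or agency
        citations: List[str]    # primary studies or guidelines the rule derives from
        evidence_grade: str     # e.g. "A", "B", or "expert opinion"
        last_reviewed: str      # ISO date the content was last vetted
        intended_users: str     # e.g. "licensed prescribers"

    example = EvidenceSummary(
        guideline_source="(vetting organization)",
        citations=["(guideline or study citation)"],
        evidence_grade="B",
        last_reviewed="2011-06-30",
        intended_users="licensed prescribers",
    )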

DR. DIERKS: And yes. I just want to remind people to introduce yourself again just for the purpose of transcription.

DR. MISRA: So, Satish. So one comment about that, a follow-up response to what you had asked, one of the products that we're actually working on now is developing a standardized evaluation system for validating medical apps. So there's actually a lot of literature behind doing that.

Some folks out of Florida and out of California have published previously on it and a group out of Canada, and so we're kind of working with them to coalesce that into a single evaluation system looking at something like 100 to 150 criteria for every app and providing some of that feedback. So hopefully that will be useful to people.

And then getting back to your question of how you assess safety and follow-up, I think one challenge is when you have a physician who's using these apps or distributing these apps, you have somebody who's familiar with that kind of postmarket surveillance, who knows if there's an adverse event, that it needs to be reported. I think one challenge will be with decision support tools that are directed directly at patients.

And an example I'll give is with behavioral modification apps, so for alcohol abuse, there's about 220 apps in the iPhone Store, the iOS Store alone, that claim to treat alcohol abuse. About six or seven percent of those actually use any validated best practice sort of approach in doing that, and so I think the challenge will be -- so when you're looking to implement this, at the outset, do you shut down the other 200? Do you remove them from the App Store? What do you do with those 200? I mean there's no commitment now to following any sort of evidence-based practice. I mean there's certainly not going to be a commitment to looking at risk and following any sort of adverse events. Granted, those are low risk apps, but at the same time, they are taking patient information and outputting some sort of advice to the patient, and there's certainly a lot of opportunity there if there were effective apps with good feedback mechanisms.

So I think one challenge will be addressing what's already out there, and what do you do with the 200 alcohol abuse apps? I think it's 170 or a similar order of number for smoking cessation apps, and we're looking at diabetes apps right now actually, but it's kind of across the board. There's a lot of, for lack of a better word, there's a lot of junk out there, and what do you do at the outset? I think that will be one thing that we'll have to consider once this actually starts to roll out.

DR. KATZ: Richard Katz. But in reaction to that, there's a lot of crappy information on the Internet, and I don't think we need to get -- and there's a lot of standard of care, and going back to my back to basics comment before, that if you're dealing with relatively low risk kind of things, there are ways to stop smoking. There are ways to stop drinking. There are ways to diet. There are ways to lose weight. There are ways to exercise. There's a lot that's not particularly high risk, and so I think we have to be careful to not -- because we're not regulating handouts to people, whether it be electronic or paper. And so we need to be able to bite off something that has a moderately high to high risk.

DR. DIERKS: So, Kristen, since you're from within, can you talk a little bit about historical roles that FDA has taken for some of these things that I think are now sort of being thrown around perceived as being low risk, like weight reducing interventions that had previously been out there advertised, et cetera, and what FDA's, you know, position had been? Maybe I'm putting you on the spot as a historian, but --

DR. MEIER: Do you mean people who are actually selling these things or -- this is Kristen.

DR. DIERKS: Yeah. So things that I think Richard is sort of kind of framing as maybe potentially being off the table because, you know, they're sort of low risk.

DR. MEIER: Well, I can certainly say as a statistician I've never reviewed these, and I don't know, Bakul, if you want to speak to this, but I don't think we -- yeah.

MR. PATEL: So as part of the guidance, we addressed that particular issue and we demarcated where regulations or regulatory requirements apply or not. So simple textbooks, fitness where it talks about weight management or doing exercises, even in my presentation yesterday I pointed out that those are not part of what we're looking at as an important thing, and again, it goes back to the whole discussion on risk.

As part of the regulatory structure, those things don't have regulatory requirements. If you're practicing medicine in a certain way, like dispensing drugs or compounding something on your own, that's not considered as part of that, but if you start using tools to compound that, or whatever treatment you're titrating, those tools, made by whoever, in which you're placing some level of confidence, need to be sort of considered in the whole process. Who is making it, how well it's made, and are you expecting the same results every time, and that's really what it boils down to.

And going back to some of the simple things that people expect to happen with these tools, they are simple and they, of course, pose lower risk than some of the complex things we've talked about, and I think the Panel discussion is great, because I'm very attentive and listening to this, about whether there is a way we can draw the line in terms of where the transition happens between the really low risk, which doesn't pass the common sense test, and the point where, okay, now I'm starting to get concerned about it. So that's really of interest.

MS. BLOOMROSEN: Can I --

DR. DIERKS: Yeah, first Meryl and then Stan.

MS. BLOOMROSEN: I think you're hearing some fundamental agreements across what I would think is like an evolving multidimensional characteristic way of looking at this. It's not necessarily on or off, black or white. I think going back to some of our earlier remarks, defining clinical decision support in general and then identifying what components or aspects or attributes of that clinical decision support and how it is offered and/or delivered, and to what extent the FDA is or isn't thinking about providing oversight and/or regulation, would be a way to start, and then with or without the risk delineation, I think -- I know we would heartily agree with the emphasis on transparency, and the transparency needs to be in terms of the knowledge base that's used, the data sources that might be involved in creating this clinical decision support offering, whatever it is.

The inferences that are used in what may be the black box shouldn't be black box. It should be transparent, as transparent as possible. What the clinical decision support tool is doing with these data and the evidence, not just the fact that it's bringing this evidence in, but what is happening with the evidence in this way and really who the intended user is and what the intended uses are needs to be thought about. So transparency, transparency, transparency.

And also I think some additional research might be warranted in terms of further analysis and development of test protocols and methods to look at the integrity of these components of clinical decision support.

DR. DIERKS: Stan.

DR. PESTOTNIK: Stan Pestotnik. As I read the guidance, I noted that EHRs and PHRs seemed to be excluded from the definitions in the guidance, and it was sort of a head scratcher for me because as we look to the horizon and we see the advent of ACOs and other initiatives like that, and I look at the market and I see referential content providers developing very nice order sets and now longitudinal care plans that end up being encoded into different EHR systems, I'm left with the question as to are those clinical decision support applications? Who is held accountable there? Is it the referential content maker? Is it the EHR vendor? And so it left me pondering with no good answers.

DR. DIERKS: Kristen first and then Mickey.

DR. MEIER: This is Kristen Meier. I just wanted to follow up on one of your comments and maybe other comments on this, Meghan, when you talked about labeling possibly and use of disclaimers. I would say that a lot of folks at FDA do depend heavily on labeling. I don't know how much of it's read, and we also have decision summaries that are posted at the FDA website and summaries of safety and effectiveness that describe a little bit about the kinds of studies that were done.

But I am curious how effective folks think labeling would be to help users understand risks of devices, whether you would really need to build disclaimers into output that's actually being presented for folks that won't take the time to read a user's manual, which my guess is a lot of folks.

And also in commenting, folks talked about transparency. Again, I just want you to think about the notion, especially for folks that develop these, would a company really want to disclose that? That's proprietary information, and is that a realistic mitigation really for some of the safety concerns? Thanks.

DR. DIERKS: Hold the thought for one second. I just want to reflect back. So one question that comes to mind is, do we count -- so when you present a probability that's based on some sort of internal calculation to a clinician, and the manufacturer chooses not to produce uncertainty bounds, and the user is relatively naive to that, you know, is that an element? Would you consider that to be an element of labeling, sort of mandatory inclusion of this sort of minimal metadata around what gets presented? It's almost kind of a way of disclosing the limits of what the output of that decision support is.

DR. MEIER: This is Kristen. That's certainly one way. I don't know if I'd call that labeling. That's almost product design.

DR. DIERKS: Yeah.

DR. MEIER: Yeah, but, yes, absolutely that's something we're trying to use to make people realize that these numbers aren't hard and fast without any +/- associated with them.
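
[A minimal sketch of the "+/-" mentioned here: if a decision-support output is an estimated proportion drawn from n observed cases, the display can carry a confidence interval rather than a bare point estimate. The Wilson score interval below is standard statistics; the surrounding function names and display text are hypothetical.]

    import math

    def wilson_interval(successes: int, n: int, z: float = 1.96):
        """Wilson score interval for a binomial proportion (z = 1.96 for ~95%)."""
        if n <= 0:
            raise ValueError("n must be positive")
        p_hat = successes / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        half_width = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
        return center - half_width, center + half_width

    def display_risk(successes: int, n: int) -> str:
        """Hypothetical label text: the point estimate plus its uncertainty bounds."""
        lo, hi = wilson_interval(successes, n)
        return (f"Estimated risk: {successes / n:.0%} "
                f"(95% CI {lo:.0%} to {hi:.0%}, based on {n} cases)")

    print(display_risk(12, 80))  # point estimate of 15% shown with its 95% interval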

DR. DIERKS: Okay. So I think it was Mickey next and then Jon.

DR. SEGAL: I want to weigh in on the transparency versus black box issue because it's a really crucial issue. I'd like to defend the black box approach but state that we have taken the ultimate transparency approach, and we've even done it even though some investors, potential investors have said they're not interested in dealing with us if we're going to be transparent. So I certainly understand incentives against it.

We have patented all our crucial algorithms and fully disclosed them. Our database is completely transparent. You can go in and see all the information that we use. I believe in the superiority of that approach for at least most things, but I would defend the ability of people like Google or Watson to do things that are black box, not strictly from a commercial point of view but just from an algorithmic one. It's too hard to explain what they're doing, and so I would feel uncomfortable with writing rules that would exclude them even though it might benefit us in some way because we do the opposite, but one should be very careful about that, and although I'm probably among those who have taken the most financial risks to advance the transparency approach, because I think that's the best approach and that's how we do things in medicine, we need to think more broadly and to be more permissive of the fact that other approaches may be better for some things.

And the second thing I just wanted to throw out, I think it's been implicit in some of our discussions is that although the guidance specifically excludes an electronic health record in itself from this guidance, to the degree to which they have decision support built into them, my understanding is that would be covered, and there's certainly a lot of people who are working on things like that, and I know specific examples from your division, that people are worried about that.

And so there's a danger of -- we're not just spooking people in companies. We're spooking people in some of the academic departments that have done the most innovative stuff on electronic health records. So we have to really be worried very broadly of spooking all these groups and spooking the investors, and there's a lot of people who sort of look at this and see a worst-case scenario, and it would be nice if we could reassure people that the approaches are going to be modest and incremental.

DR. DIERKS: Jon, did you have a comment?

DR. WHITE: Gosh, this is such a great discussion. I want to say again, one of civilization's most interesting and challenging tasks.

It felt to me like we were talking a lot about sins of commission, and I just wanted to make sure that as we reflect on all this, that we also reflect on sins of omission, and specifically when we as users of the system expect that it's going to help us catch certain things, right, and unbeknownst to us, Rule X was turned off because somebody was annoyed that it alerted them too much, and so it's not catching X. So there's sins of commission where the system tells us to do things it shouldn't, but then there's sins of omission where we assume too much from the system, you know. Can you apply a label that says this advice has not been evaluated by the FDA, you know, to stuff that comes out of -- I don't know. So --

DR. MISRA: Just one quick addition or one point I wanted to add. An additional consideration for risk, I actually completely agree with Dr. Katz, that a lot of the apps that I was referring to, the low risk apps, that's one category. I think when we're talking about patient harm, we talked a lot about overdosing insulin and things like that, but I think another important harm to consider, especially for apps directed at patients, is delay to treatment.

So there's a number of apps, for example, that will evaluate skin lesions or evaluate hearing or evaluate vision, and as these apps proliferate -- I mean there was a study in the British Journal of Dermatology about this; this was actually folks doing teledermatology, and while they had provided a valuable service, there were some data that the use of that service was actually leading to missing other lesions that a person had. So a person would be referred for one specific lesion, and the teledermatology program would look at that one lesion and say yes or no, which is sort of similar to what an app can do, but they would lose the interaction with the dermatologist who would do a full body skin exam, and there were actually a number of lesions that their data suggested were missed, and so I think another important risk consideration in addition to direct harm is delay to treatment or delay to evaluation that a lot of patient-directed apps could potentially pose.

DR. DIERKS: Yes, Meryl.

MS. BLOOMROSEN: I'm struggling with my own, our own use of terms throughout the day, and I would again go back to some of the basics that, you know, we've talked about basic practice, but I think we need to go back to the language of the proposed draft guidance and actually relook at our potentially confusing use of terms like apps and, you know, how they're used in the common vernacular versus how they may have been intended in the draft guidance, and thinking again about devices versus delivery mechanisms and clarification on the point that was recently mentioned about whether or not your draft guidance was intended to potentially address CDS that is part of electronic health records or personal health records. I think that's a very gray area in terms of how it's currently presented.

DR. DIERKS: So what I kind of sense then is that it may be helpful for the FDA and all of us, I guess, to sort of first answer one question. When we use the word clinical decision support, do we want then to sort of collectively agree that we're talking about its use by some type of a healthcare provider? And I don't know the answer to that.

And then the second sort of question is are there any things that we could maybe move toward consensus on that are sort of off the table or sufficiently low risk? And maybe off the table is the best first category because examples were given in the Panel about applications or decision support tools whose primary goal isn't clinical effectiveness but maybe is cost effectiveness or quality, and I think traditionally the FDA has sort of focused on clinical effectiveness and not really figured that it was totally within their core mission to, you know, monitor the extent to which some product actually meets its cost effectiveness or a particular quality standpoint. So --

DR. MEIER: This is Kristen. We actually aren't allowed to look at cost in our evaluations.

DR. DIERKS: So then I think we have consensus that any application whose primary goal is, for example, providing guidance on reducing utilization or reducing cost or something like that is totally off the table, which is great. We get something off the table.

I think quality is a little bit more of an area where, you know, it takes a long time to get consensus on that because, and this gets to many of the applications where people feel very good about them because what they do is they actually support clinicians moving towards a standard of care and, you know, if they don't do it, they're just back to the way they practiced before versus something that's, you know, trying to achieve a specific clinical outcome. Richard.

DR. KATZ: Yeah, Richard Katz. Well, I think it may be easier to decide off the table, on the table, is going back to whether there's a level -- what the level of risk is and where that dividing line may be, at least tries to help you irrespective of who's using it, whether it's the clinician or the patient or a mix thereof. And so you're looking at the potential for harm, and in that regard, you may need a panel of experts or such, the dermatology group that decides whether or not we're going to be missing things or over-diagnosing things, whether or not a diabetic just following their blood sugars, type 2, you know, it's more of a trend of the blood sugars, not necessarily the absolute at that very moment that makes a difference.

We are not Consumer Reports or CNET. We're not rating which are the best and which are the worst as a blanket kind of thing, but rather those which have some significant risk. So I think we need to go back to our -- like you have with review panels, where there is something that we can define as having a certain amount of potential for risk, and then those are the ones where we can really start to do some regulation.

DR. NILSEN: Wendy Nilsen. I just wanted to go back to the delay to treatment question as a risk because I think one of the reasons many of us get excited about mobile is the opportunity to access populations that have never received service or will not receive service. So many of the apps, as poorly designed as they might be, are reaching people that are not getting service elsewise. We don't know the data on this yet. I mean we're really struggling to develop research around this, but I think we really have to be cautious about worrying about the delay to service because most people with a problem like alcohol don't get treatment. So if they get an app, it might help them in some ways. I think we have to be very cautious and look at the data and figure that out before kind of putting up a cautionary flag saying, wait, we'll delay. We need data.

DR. DIERKS: So we're getting close to the end of our hour. Do we have questions that have come in? And if you can actually introduce yourself, that will be nice to have for the transcript.

MR. PATEL: This one's on.

DR. HIRSCHORN: Okay. So from this side of the room, I'm Dr. David Hirschorn. I'm a radiologist representing the American Board of Radiology, and I noticed that the way that this topic was listed in the syllabus, in the agenda is as standalone, you know, clinical decision support, and we kind of scratched our heads at that because the American College of Radiology, one of the things that you talk about is who's going to provide, who's going to decide what the clinical decision support should be, where those rules should come from.

Well, in our world, in radiology, we say, well, we are gatekeepers of imaging, and we believe the American College of Radiology should be informing referring clinicians as to which test to order, what the correct indications are for which exams, and for other ones, which yes, which no, you know, which you should not be doing yet. A brain MRI for foot pain, those kinds of things, to know what kind of test to order.

And so the American College of Radiology has had for many years appropriateness criteria that they've published, for probably decades now, that didn't get used much because they would sit on the shelf, and they wouldn't be very accessible, and what we've done is web-enabled them as web services.

But we recognize that the primary way that this is going to be used is not as a standalone CDS. We do not expect a referring clinician to say, hmm, let me consult some other tool to decide what I should order for my patient. No, what makes more sense is to say, as you go to order the test for your patient, it tells you, doctor, this doesn't make sense or this does make sense, and to give them that at the point of care, so that it should be plugged into the CPOE, the computerized physician order entry system, whether that order entry is, you know, mobile based or not mobile based regardless, but it's a plug-in service to it, and it could be made a standalone, but then it would be like it's always been. It wouldn't be used much.

Its primary use case is where it's not standalone, where it is plugged in, into a McKesson or Cerner or whatever, or some kind of CPOE that's out there. That's where it's more likely to actually get utilized and make a difference.

And so the question becomes then how would the FDA look at that, how are they going to make sure that this intellectual property, if you will, the clinical decision support rules that are being provided from one entity, in this case, the ACR, but being, you know, passed along in the chain into a bigger CPOE, computerized physician order entry system, that's providing them -- how do they regulate that, because you have more than one player involved in here in order to make this work the best way, and how do they view standalone versus integrated or a product that can be used in both ways?
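
[A rough sketch of the integration pattern described here: a CPOE system calling an appropriateness-criteria web service at the moment an order is placed. The endpoint URL, payload fields, scoring scale, and response format are entirely hypothetical and do not describe the ACR's actual service.]

    import requests

    # Hypothetical endpoint standing in for a decision-support web service.
    APPROPRIATENESS_URL = "https://example.org/appropriateness/check"

    def check_order(indication: str, exam: str) -> dict:
        """Ask the (hypothetical) service whether an imaging exam is appropriate
        for a given clinical indication."""
        payload = {"indication": indication, "requested_exam": exam}
        resp = requests.post(APPROPRIATENESS_URL, json=payload, timeout=5)
        resp.raise_for_status()
        return resp.json()  # e.g. {"score": 2, "message": "usually not appropriate"}

    # Called from inside the CPOE workflow at the point of order entry:
    # result = check_order(indication="foot pain", exam="MRI brain")
    # if result["score"] <= 3:
    #     warn_ordering_clinician(result["message"])  # hypothetical UI hook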

DR. DIERKS: So I'll take the first stab at commenting. So you're absolutely right in that. I'm a firm believer that clinical decision support for it to be effective in any way and have any hope of being useful has to be embedded in workflow, and very quickly workflow is increasingly supported by technology. So, you're right. It's just absolutely impossible to actually cut it out of or disengage it from some other technology, and that will be increasingly so.

So I think that's, you know, one of the challenges. I'm guessing and, Kristen, maybe you can correct me. I'm guessing that one of the reasons for purposes of this workshop kind of declaring or carving out that standalone was really more or less to distinguish it from decision support or rules engines that were embedded in more conventional devices, for example, mechanical ventilators or something like that. So --

DR. MEIER: Yes, that's correct.

DR. WHITE: So, David, at the break I was talking to Meryl and Dr. Katz, and my comment to them was standalone is the only place you don't get a 4G signal or something along those lines which is essentially -- that encapsulates what you described.

I'll add a layer of thought to that, which is, so AHRQ publishes the ACR appropriateness criteria on the National Guideline Clearinghouse, you know, it's a great resource. It's a real struggle, not necessarily for the ACR appropriateness criteria, but for other forms of, you know, authoritative, organization-based, clinically vetted knowledge to make its way into some of these workflow venues and some of the products we've talked about because ultimately somebody, some, you know, genius like Satish here next to me, takes those criteria and turns them into something in the app unless there's a good web services interface like you've described. So there's some technical issues to work out there. But then there's translation issues and things get dropped along the way.

Oh, how do we regulate it? I don't know. That's a great question, but it's a great, great point.

DR. HIRSCHORN: To your point, we've seen that happen where somebody will have taken the appropriateness criteria and then massaged it, you know, and then delivered it that way. Well, now who's responsible for it, you know, if it's no longer the original product?

DR. DIERKS: Mickey, did you have a comment?

DR. SEGAL: I just wanted to add that the original conception had been that decision support would be built into each electronic health record, but one of the trends that we've been noticing is that of interoperability among decision support.

What we're doing with GeneReviews is a good example of that, but things will go back and forth and people will jump back and forth and, of course, that will be incorporated into the electronic health record, that you'll be able to jump into that whole world, but where it's evolving most quickly is among the decision support resources themselves because all we have to do is agree with GeneReviews, here's our list of 2,000 codes, just hook them up. So it's all happening very quickly because it has to be done just once, without coding it into an electronic health record.

So a lot of this stuff is happening among the standalone things, but then the electronic health records are going to link into that whole world, but it's evolving quickly with interoperability among decision support systems that are external to electronic health records, but --

MS. DiCARLO: Do you have time for a quick question? Hi. Jovianna DiCarlo. I --

DR. DIERKS: Let's have the two audience questions.

MS. DiCARLO: Okay. Thank you. Jovianna DiCarlo. I'm the CEO and CIO of the International mHealth Standards Consortium. You know, and I just wanted to address what Dr. Misra was saying about the medical applications, and I have a couple of comments that I'd like to share for the clinical decision support systems.

But, you know, one policy that we might want to consider in issuing guidance, as far as for Apple, BlackBerry, or Droid, is having those companies report to FDA all registered health and/or medical apps, or FDA gaining access to all of the apps that have been registered under those criteria.

There also could be sort of like a pop-up that these companies can enlist when users or innovators are registering their apps, such as: the app you have registered appears to fall under health information or a medical application; apps under these criteria can fall under regulated guidance that you need to be aware of; please click here to determine which regulatory guidance, if any, your application falls under. That might be a great way to reach the innovators that are in their garages. Yesterday we were talking about how we are going to reach those people to let them know the regulatory guidance really mandates their process, procedures, workflow, SOPs, and training.

You might also want to develop guidance that requires that health app developers register their app with FDA and requires that they submit any functional or content changes within those applications, especially if they do the research and find that their app falls under that guidance and is regulated.

You know, I just wanted to also touch upon the clinical decision support, which is really interesting to me and, you know, my question would be in a couple of parts. Wouldn't a hospital system have SOPs for their providers to provide a standard of care? And I could see a system to capture a clinician's decision process for quality and training purposes or competency evaluation, or as a means to identify if any additional training might be needed, but I was just wondering, you know, how would such a system be validated or reproducible, in that it seems like each clinical system has its own policies, procedures, and workflow according to that particular hospital system. And then I wondered also, in the second part of my, you know, thinking about clinical decision, would it be more beneficial for specific therapeutic indications or for, say, universal end users, such as an epidemiological management tool, where we have a nation responding to a certain outbreak, and we have containment patient care, certainly with some epidemiological outbreak or foreign disease that we're not really, you know, used to treating in this part of the nation. So just a couple of considerations there.

And then I know FDA cannot and does not, you know, look at the cost issue, but one of the things I look at in clinical decision support is, you know, with some hospitals, could some manipulate these systems in order to save money or in order to, you know, get the highest reimbursement rate? You hope not but, you know, how would you be able to monitor that?

And I look at like, you know, the HMO organizations that had their policy procedure and workflow in place for which, you know, test could be reimbursed or what can be ordered and that sort of thing, and we look at, you know, sort of what happened there. Their own policies and SOPs, you know, mandated them of what they could approve and what they could not approve. So that's --

DR. DIERKS: So let me respond to a couple of -- you've brought up some really good points, and I'm going to sort of make two quick comments and then see if anyone on the Panel --

So the first is -- so the interesting thing about cost is there is actually again sort of a slippery slope because there is decision support that will present different options, one of which has a minor reduction in sort of efficacy but is significantly more cost-effective. That was sort of the cost element there that I was talking about. So it is a little bit nuanced.

But just to respond to you, I mean hospitals do this all the time. They're constantly actually creating rules or constraints in their order entries, in their information systems that actually shape and drive and push people to do clinical care in a certain way and part of that is that, you know, there is a lot of fuzziness to the way we do care. Increasingly some standards but still a lot of fuzzy areas.

But, you know, you brought up some questions that I want to pose back to either Bakul or Kristen, which is what is the FDA's plan on an individual institution developing their own order sets, their own decision support rules engines or rules with again no intent to distribute those, no intent to sell them or, you know, really overtly make any money from them? They're really just for internal consumption because that's a little bit different. It's sort of along the lines of if I'm a surgeon and I actually develop some kind of tool, I can actually use it on my patients in my operating room as long as I don't reproduce that, sell it, manufacture it, or do it in a way to make money or market it.

MR. PATEL: I think that's an area as IT gets into the practice of medicine and crosses, the line gets blurred between device, the practice of medicine and actually commercialization. I think that's been discussed quite a bit within the Agency, and legalistically, there are some bright lines. I think it would be unfair for me to sort of go off the cuff here and announce, you know, this is what we're going to do in going forward.

DR. DIERKS: But you can say that it's not something that's been overlooked.

MR. PATEL: Yeah.

DR. DIERKS: Just so people don't go away from this workshop feeling as though that's been totally, you know, an important point that was overlooked.

MR. PATEL: Yeah. So the sequence of that, or sort of past decisions on that, I'll just give you a couple of examples. Device reprocessing has been a historical thing that FDA has done, when hospitals started reprocessing medical devices like tools and scissors and scalpels and whatnot in their own facilities. FDA has taken a stance that even though you're reprocessing, you still need to meet the same sterilization standard that the external sterilization processors do with those medical instruments. So just a historical perspective on that.

So if there's any -- to that, I think that will be used as a precedent, but that's where I want to leave it at.

DR. DIERKS: We did have two questions come in. One is directed to Dr. White. So, Wendy -- oh, wait. I think it was -- yeah, the question came up on expanding on the concept of learned intermediary. I mean that's been sort of an issue or sort of a framing concept for a long time about, you know, does that actually in and of itself constitute a sufficient mitigation, and so maybe I'll reframe or nuance the question that came from the audience a little bit, and that is where would the learned intermediary not necessarily be sufficient in terms of it being a mitigator to the risk of a particular decision support?

DR. WHITE: With the understanding that I've tossed the term around quite liberally, but I don't actually understand the complete provenance, okay, of the term learned intermediary, and I'm sure there is a provenance. I'm sure we can dig it up.

I don't know. It feels to me the -- of, you know, when does it matter versus when does it not is ultimately when is the learned intermediary the final check through which the decision goes? Okay.

Now, you can start to push that boundary pretty closely. For example, calculation of irradiation field and irradiation dosage, right. It gets pretty damn complicated when you're talking about the modeling and how many centigrays are going here and how many are going there, and you do it based on 3D modeling software that, again, I couldn't possibly do in my head. You know, the upside of that is that the fields are much more complex these days compared to 30 years ago, and the consequence is that the radiation is really optimally targeted, as long as you're assuming you've got correct, you know, modeling of the tumor area and that the tumor really does get the maximum dose and the rest of you does not. That's something that if I were a radiation oncologist, I couldn't possibly calculate in my head, and therefore I've got to rely on the software to be able to do it, but ultimately if I were a radiation oncologist, I'd have to say, I'd have to look at the field and say, yep, that makes sense, and sign the order to push the button and make that happen.

So, you know, ultimately whoever is the licensed professional that is the learned intermediary, again whether it's a doctor or a nurse or, you know, whatever, that is the final checkpoint for, you know, the intervention happening, that's where it makes a difference, I think.

DR. DIERKS: Question from the audience?

MR. THOMPSON: I'm Brad Thompson. My question is for Stan. Stan, you described your products, but you didn't share anything about your experience with FDA. Could you let us know, is your product regulated by FDA? And if so, how's that gone? If it's not, why not?

DR. PESTOTNIK: The product isn't regulated by FDA, and I guess the reason is naivety, ignorance on our part.

DR. DIERKS: We'll hold it. I've got another question that came from the audience. All right. So I'll read the question and, Satish, maybe this might be best directed to you initially. How is a mobile application, and again I'm going to say clinical decision support, it doesn't really matter whether it comes as a mobile app or embedded in something else, with the ability to sort of track treatment, side effects, and push or encourage a doctor to engage in a discussion, how is that a greater risk than say a WebMD, where you can enter the symptoms and receive a suggested diagnosis? How would you differentiate that?

DR. MISRA: So specifically with WebMD, that's really just taking a list of symptoms and matching it up to a database and coming up with a probability of what the diagnosis is. I think there's more complexity and more nuance to decision support, and I mean there are equivalent examples; take the medical calculators for risk for PE or other things. I mean those are all available on the website, and I mean there's really no substantially different risk between the two.

But I think that gets back to the question of when you're talking about that level of decision support, is that something that falls under this bucket of having a substantial enough risk to warrant any sort of extensive regulation? And it may be that the things that are easily translatable from the website to a mobile app are generally not the things that carry the level of risk that we need to be regulating heavily, but with WebMD versus their essentially identical app or a medical calculator and the essentially identical app, the outcome is essentially the same. You have a piece of information that fits into an overall puzzle, and it's up to the clinician to do something with that.
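
[For reference, the "medical calculators for risk for PE" mentioned here are typically simple weighted checklists. The sketch below uses the commonly published Wells criteria for pulmonary embolism purely to illustrate the category of calculator being discussed; it is not a validated clinical tool.]

    # Commonly published Wells criteria for pulmonary embolism (points per finding).
    WELLS_PE_CRITERIA = {
        "clinical_signs_of_dvt": 3.0,
        "pe_most_likely_diagnosis": 3.0,
        "heart_rate_over_100": 1.5,
        "immobilization_or_recent_surgery": 1.5,
        "previous_dvt_or_pe": 1.5,
        "hemoptysis": 1.0,
        "malignancy": 1.0,
    }

    def wells_pe_score(findings: dict) -> float:
        """Sum the points for every finding marked True."""
        return sum(points for name, points in WELLS_PE_CRITERIA.items()
                   if findings.get(name, False))

    def risk_category(score: float) -> str:
        """Traditional three-tier interpretation of the Wells PE score."""
        if score > 6:
            return "high probability"
        if score >= 2:
            return "moderate probability"
        return "low probability"

    score = wells_pe_score({"heart_rate_over_100": True, "hemoptysis": True})
    print(score, risk_category(score))  # 2.5 moderate probability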

MS. BLOOMROSEN: Meryl Bloomrosen with AMIA. Getting back to the audience question about if Stan's product is regulated, I think it might be interesting or informative if someone from the FDA could tell us if any clinical decision support products or software have been -- if FDA has been asked to regulate it or has anyone come forward to the FDA to put its blessing on it or identified it as something subject to your oversight?

And then with respect to the second question that you just posed, it gets back to I think our questioning whether or not the delivery method of the clinical decision support information is an appropriate dividing line between that which would be subject or not to FDA oversight and a regulation, whether it's up in the cloud or on the web or on my desktop or on my phone. So that would be one of our questions for further discussion and hopefully for a resolution, but I would like to hear from someone about the other question about software being --

DR. MEIER: This is Kristen, and the answer is yes, we have absolutely seen them. Most probably have come in as Class II, 510(k)s, although some have been -- I think the regulatory path isn't clear, you know, whether these are de novo 510(k)s, whether they should be PMAs. We are struggling with this. We know that there's a lot out there that we think should come in that isn't coming in, and again I think that's why we're having this workshop, to help figure this out.

I mean on the other hand, we don't have the resources to review every CDS that is out there either. So we also need to think which are the most concerning CDS systems, but the answer is, yes, we have seen some. You can -- some, I don't know how you would search our databases to find out what those are because they might not be labeled as CDS, but again, ones that we do feel provide support have been there, and again, I do think there's a lot that's out there that hasn't come across our door.

DR. PESTOTNIK: This is Stan. I would just add a little more to my very short answer. We've struggled with clear guidance as to whether we are a device. We've paid consultants quite a bit of money to tell us whether we are and, of course, they don't give us clear guidance. What guidance they gave us was to put in place an FDA-ready quality management system with CAPA, et cetera. We're in the process of doing that, and as the guidance becomes more clear, we'll register and do what we need to do.

DR. DIERKS: So I think that's kind of a good area to stop in, and I want to thank the Panelists for a great discussion and thoughtful contributions. I hope that FDA takes away from this some good information, and thank you, Bakul.

MR. PATEL: In healthcare, reducing costs, improving outcomes, and all the benefits that clinical decision support brings, we recognize that. I think clinical decision support and the tools that come along with such software are important for us to make progress.

Many of you guys heard Todd Park speak about this, the CTO at HHS. He would say liberate the data. I think we have so much data being generated right now, from medical devices and other factors, there is going to be a need and there's going to be an obvious place for some sort of clinical decision support to happen.

As you saw the confusion and you saw the sort of dilemmas as we talked through the Panel, it's important for not only the Agency to be clear on how to provide that guidance but also to have that clarity in mind when folks like Stan create a piece of software. Where does it fall? And that's what we're trying to determine: how can we provide that guidance, just like we did on the mobile apps, as a starting stepping stone, as to what's included, what's not included? What would we not be interested in right now, what would we not be interested in later, and what's absolutely not within the purview of or would not be a concern to FDA?

I think that's the goal here, and in order for us to sort of get there, we sort of needed to have this discussion to even start talking about how does the landscape begin? What are the factors? I wanted to hear from all these different perspectives, which is really, really great for us, to sort of not only educate us -- there's folks in the audience here from FDA who probably have not been exposed to some of this, or maybe exposed to only portions of this. Getting it all in one forum, I'm sure we didn't hit all of it, but I'm sure we got most of this discussion in terms of the factors, in terms of understanding what's concerning. If I ask an individual clinician one thing about does that concern you, I'm pretty sure he or she would say, yes, it concerns me here but not here, but if I go and ask another clinician, he would probably give a different answer.

So I think that goes around for everybody in this room, and it's a reflection of the complexity of the topic itself.

I really appreciate the Panel for teasing that out and providing us food for the thinking process internally with the team to understand how we should look at this. We always see, like Kristen mentioned, we see things which are very high risk; I mean, for example, radiation treatment therapy has gone through FDA. If you go search our product database, you will probably see that. We have a regulation for drug dose calculations. So it's already been there to some degree. Do we have everything in there? Probably not. Do we have a consistent, clear way of saying what falls in and what falls out? That's really what we're trying to achieve here, trying to figure out a rational way to communicate this to the exploding area of entrepreneurs and innovators trying to create tools, make sense of the data, and help clinicians make the right decisions, ultimately going back to, you know, I keep saying patient care and patient safety. So that's really the goal here.

Great discussion. There are two people signed up, and I'm not sure if Jovianna actually spoke already or she was -- that was part of that presentation, but Mark Jeffrey and Jovianna DiCarlo had signed up in the open public comment session. We'll make that happen now since we're a little bit early, and I'll wrap up this workshop with my closing remarks in a few minutes. So, Mark, if you're in the audience, please feel free.

MR. JEFFREY: Good afternoon. My name is Mark Jeffrey. I work for the TeleMedicine and Advanced Technology Research Center, or TATRC. TATRC is part of the United States Army, falling under the Medical Research and Materiel Command out of Fort Detrick.

We currently provide research and oversight for over 750 programs and nearly $500 million. These projects include applications such as Text4Baby and ULTA Mobile Prime. We work with programs dealing with clinical decision support and mobile health. We work with organizations such as Healthwise and Mass General. You've heard all these names before in the last two days.

In the past two years, we've established an early stage platform for advanced R&D. This platform provides a virtual environment where developers can come in and work on applications. We have four different areas of concentration within this laboratory. For military, there's combat casualty care. For the nation, we support the Nationwide Health Information Network. We also have mobile health and ULTA.

One of my roles has been to formally establish the mobile health laboratory. That's why I'm here today. I appreciate that everyone that's here today has stayed around for the end of this conference, and I know that I'm between you and the door. So I'll make my comments brief.

We're still considered a new group, and yet, to be good stewards of the taxpayer dollars, we're looking to grow by gaining partnerships as opposed to new hires. We've recently created a special interest group for mobile applications, or SIGMA. This serves as a switching station for mobile application development, or MAD, efforts. I like to refer to my developers as mad scientists but not to their face.

I'd like to express my appreciation to Bakul and the FDA team for putting this on. This is a great way to bring the people that are interested in mobile applications and specifically the regulations that govern them together in one forum. I really look forward to opportunities like this in the future.

The group SIGMA is formed as an advisory panel, not necessarily a group on policy or regulations. We defer those questions and those discussions and decisions to the FDA, NIST, and other appropriate organizations. What we like to be is a clearinghouse and a collaboration point for people like you that are working on mobile applications and want to be part of the solution so that we together develop a solution that's going to interoperate and work well in the future.

So I look forward to other MAD projects that we can work on together, and as TATRC continues to support Government, academia, and commercial partners, we look forward to seeing you in all of those. Thank you.

(Applause.)

MS. DiCARLO: Hi there. Jovianna DiCarlo, the CIO and CEO of the International mHealth Standards Consortium.

I just wanted to briefly tell you about our organization. It is a nonprofit organization where you as innovators and medical device makers can go as a resource. It's imhsc.org. When we look at mobile health, mHealth, medical device development, we know that this is a global effort, and that this is not just an effort of Health and Human Services, FDA, but it's also an effort of joining together many regulatory bodies that oversee all of the components that make up mHealth, and this includes Internet, FTC, FCC, and all of the other regulatory bodies combined where we all come together.

It's interesting, you know, on our side with FDA, in standards for EMR and EHR, we do have a harmonized standards setup where HL7 and CDISC are our standards bodies that help us to adhere to the regulatory guidelines on our respective technologies, and so what they have done within industry is they realized that they had to come together, and they've done that, and they're working together now currently. And then we have the convergence also of the other regulatory bodies that have open standards on the Internet and in mobile telecommunications.

And so, being a global joint initiative, the imhsc.org has many links of resources of other global regulatory agencies and what their requirements are, if you want to commercialize your technology, certainly if it's outside the U.S., and so it might be helpful for you. So I wanted to let you know about that.

And thank you so much, Bakul, for this. This has been wonderful.

MR. PATEL: This is the end of the workshop. So I'm sure people are lining up with their suitcases ready to catch planes back to their hometowns and maybe downtown here to catch the traffic.

I just want to just close with a couple things that I observed for the past day and a half and sort of leave you with what I've heard, and it's not everything. It's just very high level, the points that I've heard.

In Session 1, we heard that communicating with folks who are not familiar with the regulations is key, and I leave you with that thought, and I think that's key, and I had an inclination; we wrote the guidance towards that objective. I think we struck a balance there. I'm not sure we got it completely, but I think, as the Panel in the session pointed out, I think that we could do a little bit more with the communications and helping folks who are new to this sector or industry to sort of understand how regulations play.

And I do want to leave one thought with most of you. I'm hoping there are still some people who are not familiar with FDA in the audience; the guidance we published is not a regulation. I use the word oversight on purpose in all my talks because not all devices require premarket clearances. Depending on the risk of the devices, they are Class I, Class II, or Class III, and there are different risk classifications and they have different requirements. So not everything needs to go through FDA clearance or approval before they're in the marketplace.

So just keep that in mind for folks who are not familiar with the FDA regulations.

Session 2, accessories. We talked about that yesterday as well, an already complex area as commercialization of medical devices and commercialization of users in healthcare changes. I think that's an interesting topic and an important topic for FDA to provide a rational way to move forward, provide guidance to the industry who are being affected and guidance to people who are innovating in this area and also for patients at the end of the day, to have that same level of confidence that they used to have in the past with the traditional medical devices.

Intended use came up many times. It's important; I think it came up in Session 1 as well as Session 2. I think it boils down to the claims that folks make and how you lead a user, a purchaser, a buyer, or ultimately the person who's making a clinical decision. I think that's important. So I'm talking at a very high level on where this is coming from, but it is important. We recognize that. I think the folks who listened in to this conversation for the last day and a half do recognize that as well.

Clarity in that, for us to look at somebody's intended use and go, yes, this is what you're claiming, is important as well. Kristen pointed that out this morning.

That takes us to clinical decision support. We talked about it all morning, about what this means and what it doesn't mean. It's a complicated area. I think we will continue to sort of explore what that definition sort of bounds us or takes us to a place, and having that definition in mind, come up with some rational way. So our goal in the end is to have a rational approach which would be useful and definitely not stifle anything in ways of creating better healthcare or better support in making decisions for patients or clinicians at the same time. So I'm going to leave that thought with you.

So next steps, I'm sure people are wondering what the next steps are. Next steps is the comment period. I've said this. I feel like an infomercial now, but October 19th is the deadline for submitting public comments on the draft guidance, including these two topics we talked about, accessories and clinical decision support. Please provide them to us. I think it will help us shape the next steps that we will take.

The Session 1 discussion about the scope of the mobile apps guidance and the approach we have taken, I think that's important for us to get feedback on as we finalize this guidance, and then prepare for the accessories and clinical decision support guidance, we can get our input together before we actually make any proposals out there.

So the process will be that we will do a similar thing for these other guidances. We will put out a proposal as a draft and then seek additional public comments and reactions to what we put out. So I think there's opportunity for the public to engage in this discussion and come out with a better solution at the end of the day.

So I thank you very much for being here and participating in this frank discussion. I'm hoping that this will leave you with thoughts and considerations in submitting those comments, which are useful for us at the end of the day. So thank you and have a great day.

(Applause.)

(Whereupon, at 12:35 p.m., the meeting was adjourned.)

C E R T I F I C A T E

This is to certify that the attached proceedings in the matter of:

MOBILE MEDICAL APPS DRAFT GUIDANCE

September 13, 2011

Silver Spring, Maryland

were held as herein appears, and that this is the original transcription thereof for the files of the Food and Drug Administration, Center for Devices and Radiological Health, Medical Devices Advisory Committee.

____________________________

CATHY BELKA

Official Reporter