Workshop Transcript One - Public Workshop on Current Status of Useful Written Prescription Drug Information for Patients. February 29 - March 1, 2000, Rockville, MD.
DEPARTMENT OF HEALTH AND HUMAN SERVICES
FOOD AND DRUG ADMINISTRATION
CENTER FOR DRUG EVALUATION AND RESEARCH
Transcript of a Public Meeting:
CURRENT STATUS OF USEFUL WRITTEN PRESCRIPTION
DRUG INFORMATION FOR PATIENTS
Tuesday, February 29, 2000
DoubleTree Hotel
1750 Rockville Pike
C O N T E N T S
Moderator: Nancy M. Ostrove, Ph.D.
Welcome: Nancy Smith, Ph.D.
Why Are We Here: Thomas J. McGinnis, R.Ph.
History of Recent Private-Sector Information Efforts: Judith A. O'Brien
Background on the Patient Information Project: Karen Oster
1999 Patient Information Assessment: Bonnie Svarstad, Ph.D.
P R O C E E D I N G S
DR. OSTROVE: Good afternoon. Welcome to lovely downtown Rockville. We are really happy to see you today. I am Nancy Ostrove, Chief of the Research and Review Branch III within the Division of Drug Marketing, Advertising and Communications, in the Center for Drug Evaluation and Research -- and if you got all that, you are better than most reporters that I have spoken with recently.
We are so pleased that you have joined us here today to hear about the FDA sponsored interim study of the Private Sector's Prescription Medicine Information Program for Patients -- I mean, that is what I have written down --
-- and I wrote it! But we really are happy to see you here. You know, I was here about half an hour ago; it was kind of thin and I was thinking oh, no, don't tell me you aren't interested in this anymore. I can't believe it. And, in fact, that wasn't the case. So, I am really happy to see you here.
I hope you will be able to stay for the full workshop, which is this afternoon and tomorrow. If you have only planned to be here for today, I would like to put a bug in your ear. What I would like you to do is to think about staying for tomorrow's small group breakout sessions. I am suggesting this because I think it would be helpful for both you and for the process. For you, I think it is helpful so that you will get a really good sense of other people's perspectives. In terms of the process, well, your staying and giving your perspective will ensure that others are aware of your perspective as well.
Now, let me assure you we are going to read all the comments that are going to be submitted to the docket. As you know, we published a Federal Register notice with questions. We asked that comments be submitted to the docket concerning this, and we are going to read every single one of them in great detail and analyze them, the way we always do, but even though those comments are in the docket, they are not as accessible to other people. Then, there are perspectives that we expect that you will all be verbalizing tomorrow. So, that is kind of the reason.
One thing -- you know that you can never avoid is glitches. There is one glitch that I wanted to go over with you about just so that when you go through your packet you are not surprised. In the packet that you got when you registered today there is a lot of material. There are excerpts from the action plan. There is the interim report. There are a couple of separate pieces of paper on the left-hand side: one is the agenda; one is the list of questions that we have asked. You also have a copy of Public Law 104-180 in there.
You also have something that lists ten criteria. Those are the criteria that were derived from the action plan that kind of served as the basis of the assessment. If you count the bullets, you will see there are only eight bullets. Oops! We kind of left out the last two. That was not on purpose, and that was not to imply in any way that those are not important because they are important. Specifically, the additional criteria -- by the way, they are there in full in your packet with Dr. Svarstad's slides, on page six -- the additional two are information that is legible and readily comprehensible to most consumers, and information is up to date and timely. Clearly, these are important criteria. That was just one of those little glitches, and if that is the only glitch that we have to deal with over the next couple of days I will be eternally grateful.
So, speaking of non-glitches up to this point, I also would like to thank everyone who has been involved in putting together this workshop and, specifically, Ellen Tabak and Marcia Trenter and all of the people at OTCOM who have just worked so hard at getting it all together.
So, with that in mind, what I would like to do is introduce Dr. Nancy Smith, Director of the Office of Training and Communications within the Center for Drug Evaluation and Research. Dr. Smith's training and early FDA experience has been as a statistician. However, as you can tell from her current position, in the past few years she has greatly expanded both her interests and her responsibilities. Dr. Smith served on the FDA task force that evaluated the system for managing the risks of FDA-approved medical products, and she is heading the Center's effort to expand communications, promoting a better understanding of the safe use of medicines.
The Center, of course, believes that this private sector initiative to ensure that patients who receive new prescriptions also get useful information about their medicines is an important piece of this overall effort to encourage the safe use of medicines.
After Dr. Smith's remarks, what I would like is for Tom McGinnis to take the podium so we don't have to have any popping up and down constantly between people. Tom is Director of Pharmacy Affairs within FDA's Office of Policy, and provides agency liaison to health professional associations. Capt. McGinnis -- oh, you are not in uniform today -- works on issues of interest to both patients and pharmacy. Most recently, he has been extensively involved in the agency's examination of Internet pharmacies.
Tom is no stranger to most of you in this room, having been involved for many years in FDA's efforts to ensure that patients receiving new prescriptions also get useful information about their medicines. Tom is going to revisit a bit of the history of FDA's patient information related efforts, kind of as a refresher in the context for the question of why are we here today.
DR. SMITH: Thank you, Nancy. I would also like to take this opportunity to welcome you to this meeting. The communication of accurate and unbiased information is extremely important to CDER and to the FDA. It is very important to us that patients can be more knowledgeable about their prescriptions.
One of Dr. Jane Henney's first actions after being confirmed by the Senate as commissioner of the agency, about 16 months ago, was to form a task force to look into drug safety. The result of that effort was a report which was published in May of last year, and that report is available on the FDA website if anyone is interested.
The report is titled, "Managing the Risks from Medical Product Use." The report discusses risk management practices within the overall healthcare delivery system, focusing on the roles and responsibilities of each component.
Risk assessment, which is the agency's primary role, was discussed both premarket and postmarket. The approval of medical products, as we all know, requires a demonstration of efficacy and safety. However, approval does not mean that a product is without risks. All drugs, all medicinal products have risks. A safe product is one that has reasonable risks given the magnitude of benefit that is expected and the alternatives that are available.
The task force looked into various ways to manage these inherent risks. All participants in the medical product development and delivery system play an important role in maintaining this benefit/risk assessment.
Our task at this meeting is to discuss an important part of that delivery system, the written information that a patient receives with their prescription. Consumers in the United States want to actively participate in their healthcare decisions. They need to know about the potential benefits and the potential risks of the products so that they can make informed decisions based on their personal values. The evaluation of the usefulness of the information they receive is of utmost importance to us, and we really appreciate your interest and involvement in this area.
One other thing I wanted to point out while I am up here -- when you registered you also received, or you should have received a small packet of information about another upcoming workshop that we would like to invite you to. This workshop will be held in late March, at the Bethesda Hyatt, and it is being co-sponsored by the FDA and the National Patient Safety Foundation. The title of this workshop is "Safe Medical Treatments: Everyone has a Role." It is designed to be a forum for consumer organizations and patient organizations to talk with us, to have a role in discussing how we can help consumers become more involved in their own care. We encourage all of you who are interested in this to register for the workshop. I know there are a couple of people here who are involved in the workshop already, and we appreciate your participation there and here.
So, again, thank you for coming and I look forward to what we will hear during the upcoming two days. Thank you.
Why Are We Here?
MR. MCGINNIS: Thank you for the introduction, Nancy. I bring greetings from Dr. Henney who wanted to be here with us today but, unfortunately, the Senate scheduled our appropriations committee meeting today. She and other top agency officials are down there all day today answering questions. So, I have been asked to set the stage for our discussions over the next day and a half on patient information issues.
First, let's quickly review 32 years of FDA involvement in patient information issues. It began back in 1968 when FDA first required warning and risk information on isoproterenol inhalation products. This was the first prescription medication that required consumer-oriented written information.
Two years later, in 1970, FDA began requiring information for consumers, in response to concerns of women's health advocates and new data, on the potential for long-term effects related to oral contraceptives and other hormone-based products.
In 1979, FDA proposed a rule that would have required prescription drug manufacturers to produce and distribute FDA-approved written information, known as patient package inserts or PPI's, for ten classes of drugs. This rule was withdrawn in 1982 with assurances from the private sector that it would be able to give consumers needed information about their prescription drugs.
When FDA took up this issue again in late 1991, as most of us in this room already know, inadequate access to useful patient information was a major cause of inappropriate use of prescription medications, leading to unnecessary emergency room visits and hospital admissions. In 1995, FDA estimated that the cost of these hospitalizations was 20 billion dollars annually. Others estimated the costs as high as 77 billion dollars in 1995 -- the same amount of money that the U.S. spent on prescription medications that year. So we were spending one extra dollar, for every dollar we spent on prescription medications, on the misadventures those medications caused.
Today we are finding more and more Americans are using prescription drugs. In 1995, U.S. consumers were purchasing 2.1 billion prescriptions per year. Prescription drug usage has continued to increase yearly with almost three billion prescriptions being written in 1999 and that number is projected to reach four billion by the year 2004.
As you have seen from FDA studies over the years, and as we will hear shortly from Dr. Svarstad, the rate of distribution of written prescription drug information materials has increased steadily over the last 15 years; however, the quality and understandability of such materials has been, and continues to be, variable.
Why are we here? In the Federal Register of August 24, 1995, FDA published a proposed rule that aimed to increase the quality and quantity of written information about prescription medications given to patients. In the proposed rule, entitled "Prescription Drug Product Labeling: Medication Guide Requirements," FDA encouraged the private sector to develop and distribute patient-oriented written information leaflets for all prescription drugs, and to set targets for the distribution of these leaflets. In addition to setting target distribution goals by specific dates, the proposed rule set criteria by which written information would be judged to determine whether it was useful and should, therefore, count toward accomplishment of the target goals.
On February 14 and 15, 1996 FDA convened a workshop on the medication guide proposal, right here, in this same hotel.
In August of 1996, Congress passed, and the President signed, Public Law 104-180 mandating that the private sector be given the opportunity to meet distribution and quality standards for voluntarily provided written information on prescription drugs. It also directed the Secretary of Health and Human Services to facilitate the development of a long-range action plan that meets the goals through private sector efforts.
The Secretary asked the Keystone Center to convene a steering committee to collaboratively develop this action plan. Judy O'Brien from the Keystone Center will describe that process shortly. The action plan, accepted by the Secretary in January, 1997, reiterated the target goals specified in the federal legislation.
These goals were that by the year 2000 written information would be distributed to 75 percent of individuals receiving new prescription medications, and by the year 2006 95 percent of individuals receiving prescription medications would be given useful written information. The action plan generally endorsed the conceptual criteria specified in the public law for determining usefulness of medication information.
Specifically, the law stated that such materials should be scientifically accurate; unbiased in content and tone; sufficiently specific and comprehensive; presented in an understandable and legible format that is readily comprehensible to consumers; timely and up to date; and useful -- that is, enabling the consumer to use the medications properly and appropriately, receive the maximum benefits of the medication, and avoid harm. This action plan, including descriptions of the criteria, has been available on the Internet ever since the Secretary accepted the plan in January of 1997.
Also, consistent with the public law, the action plan called for the development of a mechanism to periodically assess the quality of written information for patients.
To test the methodology for collecting patient information materials and assessing their usefulness, FDA entered into a contract with the National Association of Boards of Pharmacy on September 25, 1998. As you will hear shortly in more detail from Karen Oster from NABP, the contract called for the selection of several state boards of pharmacy who would arrange for collecting, from a sample of pharmacies in their state, medication information materials given with new prescriptions for three commonly prescribed drugs chosen by FDA.
The contract also called for the development of evaluation materials to assess the usefulness of the information through application of the Keystone action plan criteria. NABP arranged for this work to be done by Dr. Bonnie Svarstad, professor of pharmacy at the University of Wisconsin. The medication information materials were collected by participating state boards in 1999, and the final report from the evaluation of these materials was completed on December 21 of 1999.
Later this afternoon Dr. Svarstad will discuss in detail the findings of the interim study of the status of useful written prescription drug information for patients and how the criteria specified in the action plan were employed.
What FDA would like to get from this public meeting, following the presentation of the study methodology and results by Dr. Svarstad, is feedback from you on the methodology and the results. This feedback will then be used for development of the assessment mechanism that will be used by the agency to see if the year 2000 goal has been met.
Specifically, FDA is seeking comments on several issues and you have a copy of these seven issues in your folder that was given to you. The first two we consider extremely important.
The first one, what should the minimum standard or threshold be that must be met for written information to be considered useful? In other words, what is a passing grade?
The second one, should certain criteria derived from the action plan recommendations be given more weight than others? If so, which criteria should be weighted more strongly than others, such as should the risk information be weighted more heavily than the storage information found in these leaflets? In the breakout groups, we will ask all groups to consider those first two.
The remaining issues will be divided up amongst those groups. We will let those groups choose which ones they want to address. The first two we are asking every breakout group to address. Those two are the most important feedback information that we are seeking.
The third one, are the evaluation forms an accurate translation of the action plan's criteria? Did the researchers take the criteria and the sub-criteria and come up with statements that they looked at in the materials and applied them properly?
Should the assessment include additional criteria or types of information and, if so, what?
Next, should there be a more detailed assessment of factors affecting readability and legibility for consumers, again, type size, style, spacing and contrast? As you will see later, most of that information is in 10-point type size.
The next one, should the evaluation panel include consumers? As you will see in the discussions later, they were mostly educators and a practitioner. There were no consumers on the evaluation panel. And, what backgrounds should these consumers have if they were to be a reviewer of this information? How should they be involved in the evaluation process? Should they be an equal panel member?
Next, this report collected information from U.S. retail pharmacies where one could walk in and present three prescriptions. Nothing was collected from the mail-order side of the pharmacy business or from non-retail pharmacies such as the pharmacy benefit managers. How do we collect information from those? We will ask for your feedback on that as well.
I want to thank you for joining us for this important meeting, and I look forward to working with you over the next day and a half to address these and other important issues. Thank you.
DR. OSTROVE: I need a show of hands at this point, how many people are suffering from postprandial drowsiness? Nobody! This is very good. I saw one hand. That is very brave, Linda, thank you very much.
Now that you have a sense of why we are here today, I thought it would be helpful for you to get kind of a better idea of how we structured the workshop. So, just very briefly. Basically, today is pretty didactic. We have already heard a little bit about the history of the overall FDA involvement in this area. We are going to hear about the history of the interim assessment and, actually, we are going to hear a little bit about the action plan development itself, which will be followed by a report by Dr. Bonnie Svarstad of the study methodology and results.
We will have a break in the middle. So, you know, if you do happen to suffer a little bit from that drowsiness or you need a bio-break, there will be a time for that.
Tomorrow, the second day, armed with your understanding of the study and what led up to it, that is when we want you to talk to us and to give us your feedback, your thoughts. So, what we will be doing tomorrow is breaking into small groups. Tomorrow morning I will explain what the little numbers on your badges mean, which basically has to do with breaking up into those smaller groups. Each of these groups is going to have the opportunity, as Tom said, to address some of the questions that he raised -- the questions that are in your packet, that you will also find in the Federal Register notice announcing the workshop, and that went out in letters to people who were part of the Keystone facilitated process and to those who were part of the public workshop that was held back in 1996.
So, you may want to keep those questions in mind even today as you are listening to the history and to the study and the methodology, because those questions may help you to crystallize the thoughts that you want to express tomorrow in the small breakout sessions. Of course, at the end of the day tomorrow, after lunch basically, we will present the results of the breakout sessions.
So, with that in mind, here is the plan for the rest of today: First, you are going to hear from Judy O'Brien, who is an associate facilitator with the Keystone Center, the group that facilitated the process that resulted in the action plan -- and, I am not going to give you the whole title because it never comes out right. You all know what I am talking about anyway.
Currently, Judy is the Director of the Keystone's Energy Program but, more significant to today's presentation, Judy worked with Keystone during the development of the action plan. Today she is going to discuss the process and some of her observations.
Judy will be followed by Karen Oster, who is assistant to the executive director of the National Association of Boards of Pharmacy, Carmen Catizone -- that is not quite as long as Division of Marketing, blah, blah, blah, but it is getting there. Karen was NABP's project manager for the evaluation and she was the contact point for the expert panel evaluations that were led by Dr. Svarstad of the University of Wisconsin. Karen has been an invaluable resource and member of this effort.
After Karen addresses NABP's role in the effort and why they were selected as the contractor, she will introduce Dr. Svarstad. Dr. Svarstad will speak for about an hour and then we will take this break that I referred to which, I am sure, at that point will be definitely well deserved and probably needed. Then, when we return, she will finish her presentation.
When Dr. Svarstad has finished with her presentation, I will take questions from the audience and I will direct them to the appropriate individual. We thought it would make basically for a better flow of information if we didn't basically kind of take questions and chop up the presentation. The only limitation is that Judy O'Brien can only be here for the early afternoon, but she has assured us that since Tom was also there at the time and -- what would you say, about a quarter or about half the people here were there as well, probably they would be able to answer any questions so we should be okay.
Now, for those of you who leave before we get into the question and answer period, please make sure that when you return tomorrow -- because I am going to remind you about this again -- you bring the report and the questions that the FDA has presented that are in your folder. You know, we will have extras but not if everybody forgets. We will have wall charts up as well with the criteria and with the questions. So, don't worry about it. And, we will even have facilitators for your groups. So, Judy, you are up.
History of Recent Private-Sector Information Efforts
MS. O'BRIEN: Thank you, Nancy, for that introduction.
My name is Judy O'Brien, as has been mentioned, and I am a facilitator for the Keystone Center in our Washington, DC office. I am here today to help provide a little bit of context and background on the process that led up to the development of the Action Plan for the Provision of Useful Prescription and Medicine Information -- one of our longest titles, I think, for one of our reports.
I was a member of the four-facilitator project team that worked with this group of stakeholders in this short, very intensive process. When I was asked to do this I got immediately nervous because it was 1996 and my memory doesn't necessarily go back that far so, hopefully, I will have recreated the process accurately and, if I have gotten something wrong, please feel free to let me know that but not right now.
I have structured my comments today into five general areas. First, I am just going to provide you with a brief background of the Keystone Center for those of you who aren't familiar with our organization. Then, a little bit on how we became involved in this effort; then some information on the convening process and how we pulled together this 34-person steering committee; and some information on the steering committee process itself and some of the outcomes that came of it; then just some general impressions that I had as part of the facilitation team.
So first, who is Keystone? Keystone is a non-profit consensus-building alternative dispute resolution organization. We are based out in Keystone, Colorado and have a small office here, in Washington, DC.
We have two divisions in the organization: the science and public policy program, which is our facilitation branch or facilitation arm, and the Keystone Science School, which provides hands-on, science-based education for students and teachers, and they are out in Colorado as well.
Our role is to be the neutral third-party convener of multi-party stakeholder processes. In other words, we consider ourselves to be sort of the process experts who help bring folks together to look at public policy issues and try to resolve them through dialogue and consensus building -- so, kind of a different way than you would normally approach such issues. Our outcomes vary according to process but generally result in consensus-based policy recommendations for policy makers and stakeholders as well.
So, why was the Keystone Center involved in this process? How did we get roped into this very, very intense process from the beginning? HHS had had previous involvement with the Keystone Center through other dialogue processes and was happy with the results, so they asked us to facilitate this effort. Our proven experience of bringing together diverse groups to reach agreement was very appealing to FDA and to the Secretary, and it was one reason why they asked us to do this: they wanted to develop one action plan. So, in doing that, you know, they needed a facilitator to help move that process along.
So, as a result of that, the August 26, 1996 FR notice included Keystone as the convener and the contact for this collaborative effort which, as you know, was mandated by Congress.
Our role was to convene the steering committee process, serve as the facilitator, provide organization and logistical services and expertise, and essentially help the group forge an agreement to develop this one, single action plan.
FDA's role was interesting: they consciously chose to take a step back from the process and allow the facilitators to work with the steering committee toward achieving these goals. They were present at the meetings and provided technical support to the group, but did not serve as members of the steering committee itself.
The convening, like the whole process, took place in a very tight time frame; I think we probably pulled it together in less than two weeks. The FR notice itself called for the submission of letters from interested parties and listed several points of information that they needed to provide to us.
Those letters came to the Keystone Center, and basically FDA gave Keystone the leeway to determine who the key stakeholders should be, who were the folks who needed to be at the table. Through a review of the letters and talking to other individuals, we essentially came up with a group of 34 stakeholders. And, our definition of a key stakeholder is someone with the ability to stop the agreement from moving forward and, in our minds, those were the people who needed to be at the table. So, that resulted in a group of 34 individuals with varying perspectives but, I think, a common goal of working together to try to achieve this plan.
The steering committee process itself was very intense, as I mentioned. One hundred and twenty days was what Congress mandated for the group but, by the time we actually pulled together the group and had our first meeting, which was September 18, we had much less time than that. I think it was about a 75-day process from start to finish, from the first meeting to the final meeting, which in any other type of process would have been very difficult to do. The steering committee met seven times in the DC area, which equated to about every two weeks. This group of folks was torn away from their regular daily jobs, coming to meet in various hotels like this one in the Washington, DC area.
In between that, we also had work groups. We had four work groups set up. At the first meeting we set up these four groups, which revolved around the goals, assessment, development of guidances, and implementation. So, these groups got together in between the steering committee meetings for face-to-face meetings as well as conference calls. They developed a work plan -- you know, a plan of action and how they were going to move forward to try to address the six elements that were required in the law.
The steering committee meetings then provided an opportunity for the work groups to report back to the larger group on the progress that they were making on their draft documents and receive feedback from the group, and that would enable them to continue to move forward with the process.
These were public meetings also, and that provided an opportunity for additional comment and guidance from those folks who were not on the steering committee. I believe they were able to provide any comments to the steering committee members, who would then relay them to the rest of the group, and that provided some additional insight for the group.
At the end of the process the members were asked to submit letters to the Keystone Center that essentially served as a mechanism to determine their level of support for the plan and who, sort of, signed on at the end of the day.
Outcome -- there were sort of three outcomes that I will just mention here. The first was consensus. The ultimate goal of this group was to develop consensus-based recommendations for the Secretary. According to the report -- and this is language right out of the report -- members of the committee agreed to support the plan as a total package, although individually they may not have had an equal amount of enthusiasm for each idea or recommendation. And, that is essentially the definition of consensus that the Keystone Center uses: basically, you can live with the whole package but you might not like each piece within it, and that is what we based that agreement on.
Implementation was another outcome, and on the steering committee there was definite agreement that there was going to be a need for support of the plan by the stakeholders in order for it to be successfully implemented. So, within the report there is language to the effect that each organization agrees to commit to supporting the plan's implementation in a manner consistent with its organizational scope and goals. So, everyone wasn't going to do it the same way but everyone needed to have some level of support for the plan.
Then, there was also a recommendation for a transition group to be formed. The purpose of this was to further develop the implementation strategy to address some of the issues that were unresolved or that there just wasn't enough time to address. The specific tasks for that group were outlined in the report.
Some of our overall impressions of the whole process: This really was a unique process from a facilitation perspective. The issues were definitely ripe. The work that was done previously, the years of work on these issues, as Tom talked about, really set the stage for this group to be able to move forward.
There were definitely incentives in place, and that always helps in a process like this if you have such a short time frame. You know, this was a congressionally mandated effort. There was support from the Secretary of HHS. There was a hammer in place in case an agreeable solution wasn't reached. And, the stakeholders really wanted to do the right thing but they had different ways of going about doing it, and they needed something to sort of push it along and to kind of move the whole process forward from our perspective, and they needed this catalyst and I think this process served as that catalyst.
All of these factors really gave Keystone the latitude to help this group reach consensus and submit the plan according to schedule, and it really was an interesting, fun project, as crazy and sick as that might sound from the perspective of those who participated in it, but it was a great group of people who were really committed, and it was a lot of hard work, and I think if it hadn't been for the commitment of the steering committee members to really try to forge an agreement it wouldn't have happened.
So, from our perspective as the facilitation team, it really was a successful effort and it was an interesting experience for us to be involved in. And, that concludes my comments.
Background on the Patient Information Project
MS. OSTER: Thank you for the nice introduction, Nancy. I would like to begin by thanking a few key individuals for their help with this project. A great big thank you goes out to Dr. Ellen Tabak of the FDA, Dr. Bonnie Svarstad and Dr. Dara Bultman of the University of Wisconsin. The project would not have been a success without your extraordinary effort.
I am here today to share some background on the National Association of Boards of Pharmacy and how we helped to develop and carry out this research project. Our primary interest in this study was twofold: We wanted to ascertain the degree to which patient information was being distributed in regard to the "Healthy People 2000" objectives and, secondly, to examine whether the information being distributed was useful to patients.
The answers to these questions continue the interest of the NABP and the state boards of pharmacy in patient care activities such as patient counseling. Earlier studies conducted by NABP found that the offer to counsel and the actual delivery of counseling to patients were less than expected and hoped for by regulators and patients. It was our hope that deficiencies in these areas were being met in some small way by the distribution of useful patient information.
We also wanted to help the pharmacists to use the information and increase their interaction with patients in positive ways, directly or indirectly stimulating the interaction and dialogue between the patient and pharmacist even in situations where workload or lack of initiative on the part of the pharmacist arose as barriers.
As Tom mentioned earlier, back in August of 1996 Public Law 104-180 mandating the private sector be given the opportunity to meet distribution and quality goals for written patient prescription medicine information was enacted. This law also directed that the Secretary of Health and Human Services facilitate the development of a long-range comprehensive action plan to meet these goals through private sector efforts.
As a result of this law being put into effect, the FDA asked NABP to appoint someone to a steering committee, made up of interested parties, that would develop the action plan. Carmen Catizone, executive director and secretary of NABP, was appointed to the committee.
The action plan called for the development of a mechanism to periodically assess the quality of written prescription information for patients. The steering committee thought the appropriate mechanism to assess the quality of the information being provided to consumers by pharmacists should be developed in conjunction with the individual state boards of pharmacy. The rationale was that the actions of the pharmacist in this regard, and subsequent monitoring, would be based on state laws and regulations. It also followed that if the study results indicated that changes in practice needed to occur, then these changes must be made in concert with state pharmacy practice acts and regulations.
The state boards of pharmacy, colleges of pharmacy and pharmacy trade associations were called on to continue to educate pharmacists about the importance of oral counseling on prescription medication use. The "Healthy People 2000" goals, which were affirmed by the steering committee, were to provide useful written information to 75 percent of individuals receiving new prescriptions by the year 2000 and to 95 percent of individuals by the year 2006.
The action plan developed by the steering committee called for prescription medicine information to be scientifically accurate, unbiased in content and tone, sufficiently specific and comprehensive, presented in an understandable and legible format that is readily comprehensible to consumers, timely and up to date, and useful.
After the action plan was developed, and because of the contacts NABP had with the individual state boards of pharmacy, the FDA invited NABP to submit a formal request for proposal for the evaluation of prescription drug information materials study. Dr. Bonnie Svarstad, of the University of Wisconsin, was selected as the subcontractor on this project because of her expertise in this area. The FDA and NABP felt confident that she could effectively carry out this study.
On September 4, 1998 the request for proposal was sent in to the FDA. The purpose of the work was to collect a sample of written materials currently being distributed to patients with receipt of a new prescription for selected prescription drugs; then, to evaluate the materials according to a protocol from the criteria outlined in the action plan and to report the results to the FDA and to the public.
The request for proposal outlined time frames and duties to be carried out by the subcontractor, Bonnie Svarstad.
The duties outlined in the RFP will be described in detail by Dr. Svarstad during her presentation.
The request for proposal was accepted by the FDA and became a fully executed contract on September 25, 1998. Ten states were selected for the study. The selection was based on their size and location throughout the United States. The ten states selected had also worked very closely with NABP on other projects in the past.
NABP held its first conference call with the states on November 3, 1998 to discuss the study. Participating in the conference call were New York, Arizona, Florida, Texas, Minnesota, Wisconsin, North Carolina and Ohio. Outlined in the conference call were the three drugs that would be used in the study; how the collection of the data would happen, and the use of state inspectors for same; the cost of the drugs and state reimbursement; and chain and independent pharmacy participation.
After the call, all states were sent a timetable for the project and background information. Svarstad then sent NABP instructions for patient observers, which included do's and don'ts for going into a pharmacy, and answers to questions normally asked by the pharmacist. An observer reporting form was included with this information.
At the end of November Svarstad sent NABP a list of possible expert panelists. At this time, NABP was working with each of the states in an attempt to keep them in the study. Some of the states had to drop out of the study due to internal conflicts and lack of time, resources and staff.
On March 2, 1999 Svarstad sent in an interim report on the study. The report stated that the patient observer visits were complete in the pilot study state, Wisconsin. The report also stated that final revisions to the protocol to be used in the final study were being made. The final list of participating states with contact names was then sent to Svarstad. NABP was working on collecting lists of pharmacies in each of the states at this time as well.
On April 8, 1999 a conference call was held with the participating states, Bonnie Svarstad and Dara Bultman to discuss the instructions for state coordinators and instructions for patient observers. The state contacts had many questions about what was needed from them. The discussion covered lists of pharmacies, training of patient observers and the random sampling plan.
After the call, Bonnie Svarstad and Dara Bultman helped NABP and the states with getting the sampling of pharmacies from each state compiled for use in the study.
Another conference call was held on April 19, 1999. The team of Svarstad and Bultman continued to help the states gather additional observers, assistance with medical doctors and identities, and random sampling information. Bultman assisted NABP with much of the phone work and follow-up questions that the states had regarding the project.
In May, packets of information were sent out by Svarstad to the expert panelists clarifying their role in the study. The states gathered information from June of 1999 to November of 1999. NABP coordinated the reimbursement for prescription and observers' time.
On December 21, 1999 Dr. Svarstad, principal investigator, and Dr. Bultman, project manager, completed their report on the eight-state study. That brings us to today and discussion of that study.
Now I would like to introduce Dr. Bonnie Svarstad. She is currently a professor at the University of Wisconsin, in Madison, Wisconsin, and is a nationally known expert in the fields of social and behavioral pharmacy and patient medication information. She has focused her career on conducting research in social and behavioral pharmacy, with emphasis on professional patient communication, patient compliance and factors affecting drug prescribing and drug utilization. She has published extensively in these areas, has served on numerous national scientific boards and panels, and has won many national awards for her work. Bonnie?
1999 Patient Information Assessment
DR. SVARSTAD: Thank you, Karen.
First I would like to thank Karen Oster for that fine introduction and to thank the workshop organizers for how this was organized. I very much appreciated how efficient it has been, and I am very much looking forward to these two days. I think this project would not have been possible but for a number of people that I would like to thank before getting into the study methodology and results.
I consider it a privilege and honor to have been involved in this project. I have had an interest in this area and a concern, like many of you here, for twenty-some years. So, for me it is a personal honor to be involved at this stage because I think it is so important for the public. I think it is important for the pharmacy profession and the other health professions, and for other components of our society that I think share the concern about getting the most useful information to the public so they can use medications safely and effectively.
First, let me thank my colleague, Dara Bultman, who cannot be here today because she is back home in the communications lab teaching our students. The project certainly would not have been possible without her help as a project manager.
The study, as you know, was done in cooperation with a number of parties, and I would like to begin by thanking them. First, the FDA -- there is no doubt that Dr. Ellen Tabak enabled us to finish this project. Her professional expertise in the area of research, her quiet but firm insistence on deadlines, her support, and her effort to find the resources to do all of this in twelve months were unparalleled. I personally thank her for a wonderful job in helping us and, of course, Nancy Ostrove, Tom and the others at your office. I think anyone who is an academic always has a little bit of concern about getting involved in a project that involves political or social issues because there are so many parties involved, but I found that their support and involvement in this study was just superb.
NABP -- I would like to thank Karen Oster and Carmen Catizone. We very much appreciated their involvement and support throughout the project; they made this project one that was very interesting and exciting for us, and helped us to make connections to all the interested parties, the state boards of pharmacy, etc.
They are not here, I am sure, but the state coordinators and inspectors who collected this data -- I wish we had a way of thanking them for all the work they did, but they were a critical key here because they were the ones who went out to the pharmacies and collected data in as objective and fair a way as possible. We very much appreciate their input.
Then the expert panelists -- and I would like to identify them -- also played a key role in this project in terms of helping to develop the forms that we used, reviewing and commenting on those forms, providing guidance to us in terms of study methodology, and giving us feedback and support all along the way. Their role also was critical and made it possible for us to do this project.
Let me say a few words and identify those panelists because of their critical role here, and I would like you to know a little bit about them, without going too much into their background but I would like you to know this because they played a very important role in evaluating all this information that came flowing into Wisconsin at one point.
Heidi Anderson Harper is a pharmacist and a member of the faculty at Auburn University College of Pharmacy. She also happens now, I think, to be the head of the Social Administrative Sciences Division. She has had a career-long interest in the design and evaluation of drug information materials for the public, has worked with many pharmacies as well as public agencies to try to do a better job in this area.
Robert Beardsley is at the University of Maryland. Bob is very well-known nationally for his work in this area. He has done research in the area. He is a pharmacist, has his Ph.D. and is also an associate dean. Bob and his colleagues, Carol Kimberlin and William Tindall, are authors of one of the most well-known and recognized textbooks on communication for pharmacists. So, he has taken a career-long interest not only in improving information but in figuring out ways to disseminate it to our students so that they are prepared to do this once they get out into practice. So, we are happy to have him involved.
Chester A. Bond is a Pharm.D. He is now at Texas Tech. University. He used to be at the University of Wisconsin. He is a specialist in psychiatric pharmacy, continues to practice pharmacy, and is an associate dean. He is an author of some of the very classic evaluations that have been done to improve or enhance the pharmacist's role. So, he played a key role.
Marie Gardner -- I am sure of you know Marie. Marie is a Pharm.D. and she is at the University of Arizona. She has taken leadership nationwide in figuring out ways to better prepare pharmacists for their role as educators, and has made many innovations in this area. She continues to practice pharmacy.
Carol Kimberlin is not a pharmacist. She is an educational psychologist, and I think would bring a consumer perspective to this panel. Carol is at the University of Florida, at Gainesville, in the School of Pharmacy. She is a full professor, and has been involved for a number of years in communications research and has done landmark studies in teaching pharmacists and evaluating pharmacists' role as educators.
Duane Kirking has a Pharm.D. and a Ph.D. He is also in the audience. Would you stand, Duane? Thank you. Duane was a member of the panel. Duane is at the University of Michigan, and has a long-time interest in drug information. We selected him also because he is a careful reviewer of materials. I have had experience with that as an associate editor myself, and I thought he would provide a critical perspective. He also has a good understanding and expertise in the area of drug utilization, review and statistics.
Sharlea Leatherwood is a pharmacist, and she was a very good addition to the panel, I believe, and was suggested by NABP. She is from Missouri, community pharmacist and past president of the Missouri Board of Pharmacy. So, she brings another perspective.
Helene Lipton -- Helene brings a consumer perspective because she is not a pharmacist; however, she has been at the University of California in the School of Pharmacy for a number of years, and is now a full professor and has a strong interest and history in health policy studies and, in particular, an interest in drug use in the elderly. So, she added another perspective.
Betsy Sleath is a pharmacist, has her Ph.D. in social and behavioral pharmacy, is a registered pharmacist, and she is now at the University of North Carolina. She has also done extensive research in communications.
So, we tried to get a panel then that had some linkage to pharmacy practice and some experience and commitment to careful, objective evaluation of communication and drug information so that we would get as many -- I think a strong, diverse panel, if you will, to evaluate this as carefully as we could.
I should also say a thing about our project manager, Dara Bultman. Dara came to us from Purdue. She has a master's in clinical pharmacy and a Ph.D. in social science, but she has continued to practice pharmacy, and continues to practice to this day. I think she has over fifteen years of pharmacy practice experience, so she brings with her, again, this integration of pharmacy practice as well as the science of trying to evaluate drug information.
I suppose I should say how I got interested in this before going too far. My own background is in medical sociology. I came to pharmacy through the back door, so to speak. I guess my closest connection is that my uncle is a pharmacist, but I did my Ph.D. studies on doctor-patient communication, back in the early '70s.
I think I always remember the day that I discovered that as a non-pharmacist, non-health professional what I was trying to do in my very first study was to figure out how we could measure patient compliance with the prescription regimen. As many of you know, that is now a hot topic and a major public health problem -- patients not taking their medications as prescribed. Well, I was trying to do one of the earlier studies. I was trying to develop a measure for patient compliance, and I figured out the obvious, I guess, that you cannot measure patient compliance unless you know how the drug is supposed to be taken. Right? Right!
Well, part of my design had involved sitting in on the doctor-patient encounter. In fact, I watched over 300 encounters between doctors and patients and took very careful notes. The plan had been for me to then train interviewers to go out and interview patients about what they understood from that encounter. I went back to my office, sat down and, lo and behold, I could not figure out how those drugs were supposed to be taken.
At the time, I said is this a drug that should be taken every day or just as needed? You couldn't always tell because the label didn't always make it clear. Should the drug be taken for so many days? Should it be taken continuously? I could not always tell based on what I had heard the doctor say, based on what I had reviewed from the medical record, and in those days there was no written information being handed out to patients.
I think the light bulb went on that there was a gap here in communication. Physicians simply were not providing what seemed to me sufficient information to the patient so that he or she could figure out how the medication should be used. Sure enough, when we went to interview the patients, they were quite confused. They made many mistakes. I think that is where I developed a career-long interest in this issue of information -- how can we do a better job of it so that the consumer is less confused, more knowledgeable, more comfortable with their medications?
Over the years I have done a number of studies. This was probably one of the fastest ones, in a year, but I think it is kind of a culmination, if you will, of a number of studies that we have done in VA pharmacies, outpatient pharmacies, community pharmacies and a variety of circumstances. So, we are pulling together, I hope, methods that we have used over the years, and I look forward very much to your comments on the methods that we used, in the next day or so.
Briefly, an overview of past studies, without going into too much detail here. As some or many of you know, there have been a number of studies in this area but they were largely based on patient report. Certainly, I think some of the more important studies -- the most important studies -- have been done by Lou Morris and his colleagues here, at FDA.
I think these studies have been interesting because if you look at them in terms of pharmacy at least, the reported rates of written information transmission have increased four-fold from 1982 until 1994. Back in 1982, one of the first studies that they reported showed that only 16 percent of patients reported getting some kind of written information. In 1994 that had jumped up to about 59 percent. So, we have seen something that has gradually been changing over time, and I think we need to keep that in mind as we look at these study results because we are looking at a change over probably several decades here, and changing views about what is appropriate, changing views, changing technology -- change.
While there have been a number of studies, few of these studies have evaluated the quality of written information per se, aside from just reporting on the number of people who get it. So, we felt that this was a very interesting study in this respect, in the sense that it gave us an opportunity to look carefully at the usefulness or quality of information.
I should point out incidentally to you, if you are not aware of it, that when you look at the rates of written information out of the physician's office, the rates are quite low. In 1982 only 5 percent of the patients were getting written information from their physician, according to these studies. In 1994, about 15 percent of patients were getting written information from their physician. So, the change that we have seen in pharmacy is pretty remarkable.
Now, how does this study differ from other studies? Data collected by state inspectors -- while there have been studies using the shopper technique, or having someone enter the pharmacy to collect information, I think this is one of the first studies or certainly one of the larger studies using state inspectors or, if not state inspectors, other persons who have been trained to collect written and/or oral information in the pharmacy.
This was important because I think using state inspectors enabled us to minimize the recall problem that we would see if we asked consumers to report. We wanted to minimize that as much as possible. Also, I think we wanted to standardize the stimulus to the pharmacist because consumers have very different interests in information and they are very much likely to have different approaches in terms of coming into the pharmacy. Some people will ask for information, others will not. So, we wanted to standardize that and make it as uniform as possible.
Pharmacies, as you heard from Karen, were sampled in eight states, and those eight states represent four different regions of the United States. These are not randomly selected but they do represent different areas of the country, I think, so that we at least have some diversity.
The criteria -- I think this is the first time that I am aware of that explicit criteria were developed, in this case from the action plan, so that they are visible to everyone. Every one can comment and see what we are doing. I think it is a much better technique because it can be subject to consensus building and, certainly, an attempt can then be made to build or develop standardized forms that can be tested in terms of their reliability. I will comment a little bit about that in a moment. It was also the first time that we had a national panel evaluating the information. So, this was clearly something that was unique.
The questions that we are going to deal with today are in the report, and I apologize to those of you who have done your homework and read the report but we will highlight them. First, what percentage of patients actually get information, that is, the frequency of any information getting to persons regardless of its quality?
Secondly, how do experts rate the quality of that? Then we will take a look at some of the criteria, the specific criteria to see whether certain criteria are being met better than others. That may provide some further guidance or ideas for improvement.
In terms of the methodology, I would like to highlight a few points about sampling, the protocol itself, the pilot study, the forms, and our attempt to evaluate inter-rater reliability, and a little bit about how we accomplished the data processing.
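[As an illustrative aside for the printed proceedings: the inter-rater reliability mentioned above can be sketched with a generic example. This is not the study's actual procedure or data -- the two hypothetical panelists and their ratings below are invented -- but it shows the standard calculations, percent agreement and Cohen's kappa, for two raters coding the same set of leaflets against a yes/no criterion.]

```python
def percent_agreement(r1, r2):
    """Fraction of items on which two raters gave the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)                     # observed agreement
    categories = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)     # chance agreement
             for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical ratings of ten leaflets: 1 = meets criterion, 0 = does not.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

# Agreement is 8/10 = 0.80; kappa is lower because both raters
# say "1" most of the time, so much agreement is expected by chance.
```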
First, let's talk about sampling. As Karen explained, and I don't need to reiterate, NABP selected ten states to start out with. We ended up with eight states that agreed to participate in 1999, and they are listed there. Thanks to her introduction, which was very clear, I don't need to spend more time on this unless there are questions later this afternoon about what the implications of this are.
The next issue, and probably the hardest and most difficult one for us, was how to select these pharmacies because each state has a different system. Some states have their pharmacies all available on a disc; others had hard copy. Some states have an extremely large number of pharmacies; others were smaller. So, this was quite a task. In Illinois and Washington we were able to sample statewide, that is, to randomly sample all pharmacies in the state to get forty pharmacies. That was not possible in the other states due to limited resources, inspector time, etc.
So, other states sampled within certain geographic regions that we helped them select, and usually by zip code. The states varied somewhat by the number of regions that they were able to sample within, with some sampling one area, some sampling four to five areas, and others sampling seven areas. Within these regions, though, we had what would be called in the business a systematic random sample. That is, you start out in a random way, either with a table of random numbers or through computer software which helps you select the pharmacies randomly. This is very important, obviously, because we would not want inspectors or state representatives to be selecting pharmacies that they are worried about, nor would we want them selecting pharmacies that they think will put on a good face for the state. We really wanted to see a random sample so that we would get as close to a true picture as possible. And, I think we were able to accomplish that in a short period of time, which is very important.
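[As an illustrative aside for the printed proceedings: the systematic random sampling described above can be sketched in a few lines. The list of pharmacies and the sample size here are hypothetical, not the study's actual data -- the point is only the technique of a random start followed by a fixed sampling interval.]

```python
import random

def systematic_sample(pharmacies, n):
    """Draw a systematic random sample: choose a random starting point,
    then take every k-th pharmacy from the ordered list."""
    k = len(pharmacies) // n           # sampling interval
    start = random.randrange(k)        # random start within first interval
    return [pharmacies[start + i * k] for i in range(n)]

# e.g., draw 40 pharmacies from a hypothetical state list of 400
state_list = [f"pharmacy-{i:03d}" for i in range(400)]
sample = systematic_sample(state_list, 40)
```

Because the start is random and the interval is fixed, no one choosing pharmacies by hand can steer the sample toward or away from particular stores, which is the property the study needed.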
Because Wisconsin started with a pilot study, our sample was thirty pharmacies and in the other states we had forty pharmacies. So, there was a total of over 300 pharmacies in this final study.
Now, the observer protocol -- we referred to the inspectors as patient observers for purposes of this discussion. These observers had minimal training, although they did have some training. We sent out manuals to them and we sent out commonly asked questions. Some of those materials are included in the appendix of the report.
We encouraged the state coordinator to do some role-playing with these inspectors so that questions could come out, and if they did have questions they could get those questions back to us so that Dara or myself could respond, and the effort here was to, as much as possible, get uniformity without having a massively expensive study where we would go out to all of these states and conduct in person or face-to-face training. So, they did have some training and certainly they had a protocol that they were to follow.
The majority of these individuals were, in fact, inspectors working outside the area of their state for which they were responsible. So, if someone was responsible for inspecting in Upstate New York, we did not have them collect data in their region. We asked that they collect in another region so they would not be recognized, for obvious reasons, and, for the most part, I think they were able to do this without being recognized. On the observation protocol we had an item: "Did you recognize anybody in the pharmacy? Or, do you think anyone recognized you?" If they did, that is, if they recognized anyone or if someone recognized them, they were instructed to leave the pharmacy as diplomatically as possible and to replace that pharmacy.
In some cases the state did not have enough inspectors to do this observation within the short time frame so we assisted them, if they needed it, in finding individuals to help them with the process, and they then authorized these individuals to do the inspection.
Seventy-one percent of the visits were made by men, with an age range from 21 to 85 years. The male/female distribution here probably reflects an inspector employment situation. So, it was not possible for us to control or to dictate the demographic characteristics of the inspectors although we might like to see more diversity. The age range there of the older individuals probably reflects the non-inspectors who were hired to assist. Ultimately, we will be able to see whether or not any of the results vary by the characteristics of the observer because the observation form included some very brief items that asked the observer to give his or her age, gender and we, of course, know who was an inspector and who wasn't an inspector. So, we will be able to look at that. My impression though is that this will not be an issue.
The standard scenario -- the observers had a defined health and medication history, and I think that information is provided in the report. Certainly, if you have other questions about that, you can ask later on. But, the idea here was to give the pharmacy or the pharmacist the easiest possible situation, that is, an uncomplicated health and medication history. The inspectors or patient observers were given very specific instructions: if you are asked this, this is what you say. If you are asked this, this is what you say. The idea here, of course, is to try to standardize the background of the patient observer so that they did not have other medications; they did not have other health problems; and they did not have any other extenuating circumstances so that the pharmacists would all be getting the same situation.
All of the inspectors were given cash to cover these three prescriptions so that when the issue came up of how they were going to pay, they could say that they were paying in cash and would collect, if they had insurance, somewhere else. We did not want to interrupt the dispensing process, nor did we want to place any undue demand on the pharmacy, so we tried to keep it as simple as possible.
These medications were selected by NABP in collaboration with FDA. So, we didn't really have a role in this, but I think they did a good job of selecting them because you have a short-term medication -- amoxicillin. You have a medication that could be used in a variety of ways and for a variety of conditions, which kind of calls for counseling. Then, of course, you have a chronic medication for depression that has other issues involved. So, I think it is interesting then that we were able to get data on three somewhat different types of medication.
Now on the other side, the patient observers were instructed again to give the pharmacist or the pharmacy an opportunity to give information spontaneously, voluntarily, without a request from the patient. So, they were told to be polite and to accept any information that they got. So, if the technician or clerk asked them, or the pharmacist, "would you like written information?" they would say, "sure." Or, "would you like to talk to the pharmacist?" "Sure." But they were not to seek information. The aim there was to try to be able to get findings here which would reflect information given spontaneously, and I think this was an important decision because, as you know, certain segments of the population may have low expectations, do not want to bother pharmacists by asking for things, or perhaps they simply do not feel that they deserve more information, or there may be many reasons why people do not seek information, and what we want to know here is whether they are getting it spontaneously or voluntarily.
They were also told not to ask questions or to initiate talk, unless they were asked a question, and of course they were to respond appropriately. Again, we did not want them taking the time of the pharmacist or in some way influencing the encounter, to keep this standardized.
They were encouraged to role-play, as I pointed out earlier, and the common questions are listed in the appendix.
Immediately after they were in the pharmacy, they went out to their car, transportation, or wherever, and immediately filled out a one-page observer form. That form asked them about their characteristics, the characteristics of the person with whom they spoke, and a few other items that basically asked them to describe the situation, including whether or not the pharmacist or person giving them the information mentioned it. I didn't put that in the report but I will try to remember to do that at the end because we have some information about how these sheets were actually disseminated by the pharmacy.
Now, we did a pilot study in Wisconsin basically to test this protocol, to see whether it was working, to see whether there were any problems in it that we would want to know about before asking seven other states to do it. We wanted to find out whether or not one visit to a pharmacy yielded reliable information, and basically to do what most researchers do in any pilot study, to test the protocol, its feasibility, etc.
We also wanted to talk to the pharmacy manager because we wanted to know if we were recognized, or if they recognized the observer, or if they defined this as something that interrupted their pharmacy. So, we got consent from the pharmacy managers. They were told that there would be an unannounced visit at some time in the next time period, but they were not told when nor were they given any details about the scenario. We were surprised to the extent -- they were very cooperative and we got a very wide representation of all types of pharmacies -- very much appreciated.
Two visits were made to each of those pharmacies by two different observers on different days. Managers were later interviewed by telephone. Dara Bultman did those interviews.
We found that one visit was sufficient. That is, if you got information on the first visit you were almost certain to get it on the second visit. There was high agreement in the frequency of written information between visit one and visit two. That told us that was an okay decision, in terms of the national study, to only make one visit to each pharmacy because you would get pretty much the same results.
We discovered, of course, the obvious. We didn't think about it beforehand but, in retrospect, we should have anticipated it, that if you sampled and happened to get two or more pharmacies within a corporate pharmacy or chain pharmacy, you would have to change the identities because their computer profile system is such that you would be detected at the second pharmacy as having picked up the same medications in the past month. As soon as we discovered that, we were able to make that change and make the change in other states so that there was no problem there. We also found no other flaws in the design, so we decided that this pilot study could serve as data collection in the state.
We then went on to develop these patient information evaluation forms, and let me say just a few words about how we did that. We had one form for each drug, as you know by looking at the report. The final versions of those forms are provided in the appendices so you can see how that was done.
Each form listed the ten general criteria that we came up with based on the Keystone panel report, and for each of these three drugs there were 28-32 sub-criteria for evaluating the information. That is, for each criterion we had some sub-criteria under it. The reason we did that was, of course, to enhance inter-rater reliability or agreement from one rater to the next because we want very much to have a tool that has good reliability.
The criteria, as you know, were developed from the action plan. We did the best we could. We focused on ten. You might come up with some others that were missed there but I think generally we were pretty comfortable with the way it turned out. We used the ten. We refer to these ten criteria to make the discussion of this easier. So, we will refer to criteria one, two, three, four, five, six, seven, eight, nine, ten type of thing.
The forms were revised until all panelists approved. We sent out version one, version two, and got comments back from panelists, made changes and then sent out the revision. The idea here was probably very similar to the Keystone panel. The aim was to build a consensus and to also build on the somewhat different perspectives of the panelists, although it was remarkable how much agreement there was. But these forms were agreed upon by the panelists. We found that it was not necessary to meet. Instead, we did this largely through the mail and e-mail and faxing, and so forth in an attempt to keep the study's costs down, and I think we were able to achieve that although we do have some panelist suggestions for the future which I will mention later.
The ten criteria, as you know -- scientifically accurate, unbiased. You are now beginning to recognize those I think so I won't go through them, except that I personally think that the Keystone panel did a superb job in pulling these together. The reason I say that is, in part, because I am familiar with similar initiatives in other countries. I know that, for example, in Australia they are now in the process of trying to do an evaluation like this, or planning one. So, I have seen their criteria which are very similar to these criteria. I am also aware that the equivalent of the FDA is interested in doing this kind of evaluation in Sweden, and it is remarkable to see the similarity of criteria. I think that eventually we could see some international collaboration here and expect that to come out of the World Health Organization possibly down the line. So, I think the leadership that is being shown in the United States is quite remarkable here and will be helpful to other countries.
We had two basic methods for summarizing or rating these things. We called them, for the sake of simplicity, the one-point method and the nine-point method. I will just say a few words about each method.
The one-point method basically gives one point for each sub-criterion with full or partial adherence. It is very important for you to know that this is full or partial adherence. In my view, we are taking a somewhat liberal or conservative approach, depending on your point of view -- maybe I shouldn't use that language. The raters, when rating this, gave either a partial check or a full check, a partial check if the information sheet covered at least partially the items in that sub-criterion; a full check if it covered all of it. In reporting the results back to you, we are including in here whether it is partial or full. In other words, that means that if a sub-criterion listed five side effects, they did not have to include all five in the information sheet to get a point. Okay? That is a pretty important point because you could find people who may say you should have nothing but full adherence, and I think that is a policy issue not a scientific issue. That is a policy issue.
We defined, for the sake of summary, five levels of information quality using the one-point system: level zero if they got no information; level one if they got 1-19 percent of the sub-criteria; level two if they got 20-39 percent of the sub-criteria; level three if they got 40-59 percent, and so on. So, you kind of see what we are doing here. We are trying to say what percentage of the criteria were met in that information sheet. So, you can see that when we get to the results.
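As a purely illustrative sketch, and not part of the study itself, the level assignment just described can be expressed in a few lines of Python. The function name is hypothetical, and I am assuming the levels continue in 20-point bands up to level 5 (80 percent or more):

```python
def one_point_level(subcriteria_met, subcriteria_total):
    """Map the share of sub-criteria with full or partial adherence
    onto the summary levels 0-5 (level 5 = 80 percent or more met)."""
    if subcriteria_total <= 0:
        raise ValueError("need at least one sub-criterion")
    if subcriteria_met == 0:
        return 0  # no information / no sub-criteria met
    pct = 100.0 * subcriteria_met / subcriteria_total
    # 1-19% -> 1, 20-39% -> 2, 40-59% -> 3, 60-79% -> 4, 80-100% -> 5
    return min(5, 1 + int(pct // 20))

print(one_point_level(5, 30))   # about 17 percent of sub-criteria -> level 1
print(one_point_level(25, 30))  # about 83 percent -> level 5
```

The made-up counts in the example are only there to show the banding; the actual per-drug sub-criteria counts (28-32) are in the report's appendices.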
The other method is a global rating. It is not truly global but it is more global than the one-point method. That is the nine-point method. One to nine points for each general criterion, each of the ten criteria. There you have three levels of adherence. If they gave it a 1-3, we considered that to be low adherence. If they gave it a 4-6, we considered that moderate adherence, and 7-9 high adherence. A 7-9 might be if the information sheet really adhered to the sub-criteria very well. Okay?
You can see how this was done. When you go to the appendices and you look at the rating form, you will see 1-9 and those are ratings that the rater put on there.
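To make the nine-point banding concrete, here is a minimal sketch in Python of the 1-9 rating bands just described; the function name is mine, not the study's:

```python
def adherence_band(rating):
    """Classify a rater's 1-9 score into the adherence bands
    described above: 1-3 low, 4-6 moderate, 7-9 high."""
    if not 1 <= rating <= 9:
        raise ValueError("rating must be between 1 and 9")
    if rating <= 3:
        return "low"
    if rating <= 6:
        return "moderate"
    return "high"

print(adherence_band(2), adherence_band(5), adherence_band(8))
# prints: low moderate high
```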
Inter-rater reliability -- panelists were assigned to three subgroups by drug. So, we had an amoxicillin subgroup, an ibuprofen subgroup and a Paxil subgroup. The reason we did that I think was to enhance reliability, to make it easier for the panelists because once they became familiar with the criteria it would enable them to do an effective job on this. To make them try to review all drugs I think would be quite difficult and would probably require more time than we had. So, we were trying to streamline this as much as possible.
Each subgroup included one practitioner and two drug information specialists. We are eventually going to evaluate reliability in this way to see if the practitioners had different views than the drug information people. But this is a method that is used in other health professions when rating practices. So, we were borrowing and building, if you will, a little bit on what other health professions do when rating the quality of a practice. In this case we are evaluating information practices. Panelists for this reliability study independently reviewed four items.
Then we calculated statistics, and if we looked at the total summary we got a Pearson correlation of the total score of 0.95, which those of you who are into statistics know is a very good score and suggests that the raters were in agreement, high agreement with each other even though they did this independently.
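For those less familiar with the statistic, the kind of inter-rater agreement figure mentioned here can be computed from two raters' total scores for the same items. This is a rough sketch only; the score values below are invented for illustration and are not the panelists' actual ratings:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two raters' scores on the same items."""
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need two equal-length score lists")
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical total scores from two panelists rating the same four items
rater_a = [22, 28, 18, 30]
rater_b = [21, 29, 17, 31]
print(round(pearson_r(rater_a, rater_b), 3))  # prints 0.998 for these made-up scores
```

A value near 1, as in the study's 0.95, means the two raters ranked the items almost identically even though they worked independently.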
But when you looked at the ten criteria you saw some differences in reliability that I think I would like to come back to later this afternoon. I would like you to remember this. If you look here, there is a Pearson correlation of 0.9. That means that the agreement is not quite as good because it is less than 1. That is for criterion number eight, about legibility and comprehensibility. But the Pearson correlation is very close to 1, meaning very strong correlation, for criterion number two, and the others were all over 0.50. That is, the most disagreement among the panelists was on this criterion, whether it was legible and comprehensible. I would like to suggest later that we need to address that in future work.
How did we process data? The packets were sent to UW Madison where we removed identifying information. We made copies and then we sent these copies and a packet out to the three panelist subgroups for their rating. Items were divided among the three panelists. That is, there were not multiple gradings by the panelists; rather, each panelist handled about one-third of their group's items. Okay? Items were independently reviewed by the panelists. Thus, each panelist rated one-third of their group or one-ninth of the total, if that is clear. Okay?
Another way to think about that is in a given pharmacy the patient might have gotten three information sheets, one for ibuprofen, one for amoxicillin and one for paroxetine, and the way our system worked you would have three different raters rating these, which I think is actually an advantage.
We are now at the point of starting to look at the results. The question is whether we should have a break here or whether I should -- should we have a break here? It feels right to have one, and then we can start with the graphs. Thank you for your attention.
DR. OSTROVE: It is about 2:50 now. Why don't we try and return by about 3:10?
DR. SVARSTAD: The intent here is to quickly go through some of the findings, and I should add that I asked the FDA staff at CDER to make some photographs of some of the actual information sheets so that you can see some of the examples, and I have added those to the file at the end. So, I would like for us to have some time for that, and I hope that we don't really need as much time as we originally said for the findings so we have more time for questions and discussion.
Percent of patient observers given any written information by drug -- I thought it would be pretty deadly to put those tables up here so my secretary has converted these into some graphics. But for those of you who want more details in terms of number of cases and exact percentages, you will find that in the report, tables 1 through 3. The results will be presented by ibuprofen, amoxicillin and paroxetine.
Wow -- that was my first thought when I saw these data. This suggests or shows that over 80 percent of the patient observers in eight different states received at least some written information. Now, if you remember back to the summary that I gave you of review of past studies, I think the last report was for 1994 and at that point 59 percent of the patients were getting written information. Our study would suggest that at least for ibuprofen over 80 percent are getting some written information. The same with amoxicillin. Same with paroxetine. In other words, there is consistency across drugs.
Now, what do we make of that? I think that at the very least, regardless of what you say about content and about usefulness or format, we really should note the good news here. The good news is that three out of every four or more, by these data, are getting some written form of information, and I think this reflects remarkable progress and it is important to get that word out because I think that is not the case in other countries and it certainly wasn't the case in this country sometime ago. So, we need to always put things in historical context and to report the good news along with any concerns that we have.
Distribution of ratings using the one-point method, and this is for an N of 306. That is 306 pharmacies where we had complete data available. This is the first distribution of ratings that shows the percentage of sub-criteria that had full or partial adherence. You remember I said we had levels zero, 1 through 5, with 5 being the highest and 5 meaning 80 percent or more of the criteria being met; zero meaning none.
For ibuprofen, you see that the bulk of them are at levels 4 and 5, but that the majority did not receive information at level 5. Amoxicillin was somewhat better. That is, we see some variability here by drug in terms of the content. Now, for amoxicillin, that basically means that over 60 percent of the patient observers received information at this level, the highest level. The rest of them were variable. This percentage, here, reflects the 13 or so percent that got no information. The same with paroxetine.
Let's look at the individual criteria so that you can see some differences criterion by criterion across the ten. What we have done is put some charts together that show ratings by criterion. The first one is: does the information include information about the drug and its benefit? This is excluding the people that did not receive any information. Okay? I am pretty sure about that, although this slide is a little ambiguous. Your tables will show it more precisely.
Seventy-eight percent of those information sheets had high adherence, that is, rated in the 7-9 category by the raters. That is pretty good, meaning that they are pretty good about giving the information about the drug and its benefits; 79 percent of the amoxicillin was rated highly, and for the paroxetine 77 percent. So, that is pretty similar across drug type.
But you do see about 22 percent that are rated either moderate or low on this, meaning the raters felt there was not sufficient information. As you know, sometimes this could mean that the pharmacist is giving out a short form of the drug information, or it could mean that the information sheet, even the long form of the information sheet does not include enough information about the drug and its benefits.
Two: specific directions. Thirty-three percent of the ibuprofen sheets were rated as high adherence on specificity of directions -- room for improvement there. Amoxicillin was better. So, you see some variability here. Paroxetine, 59 percent. So, you don't see the high levels that you saw on the first criterion.
Contraindications and what to do, criterion 3, pretty low on ibuprofen according to the raters, only 12 percent of the information sheets met high adherence on this particular criterion. A little bit better on amoxicillin, right here you see 31 percent high adherence; 49 percent and 20 percent moderate and low adherence; and about 21 percent for paroxetine.
Now, if you have specific questions -- I am sure you have discovered this about the report -- you can go to tables 4-6. We are not going to do that right now and get sidetracked, but you can go to those tables to figure out which of the sub-criteria were not met according to the raters, to get some feeling for why this was so low, or another way to look at that is why these weren't higher. In other words, which of the sub-criteria were not being met. This is kind of a summary.
Precautions and how to avoid them -- ibuprofen, only 5 percent -- I think that is a mistake. No, let's see. I would have to check that. Is that a mistake?
AUDIENCE PARTICIPANT: It is 5.3 percent in the report.
DR. SVARSTAD: Yes, I would have to look at my table to see that, but that is pretty close to what the report is. But I know there was one table that we fixed because we found a typographical error, but the report as it is now I think is okay.
You see consistently, fairly consistently here that ibuprofen received lower ratings, but paroxetine isn't getting very high ratings here either. So, in the area of precautions and contraindications the ratings were not perhaps what you would like to see.
Adverse drug reactions, considerably better. Over 80 percent met high adherence according to the raters. About 80 percent met it for amoxicillin and about 77 percent for paroxetine. So, you can begin to see that there are some real differences by the criteria that you are talking about here, with contraindications and precautions receiving lower ratings according to the raters.
Storage and general information -- again, there were some lower ratings here and you can look at the more detailed tables to see which items were excluded to get an understanding of why these are low. I think when we discuss tomorrow, the question is, well, of all of these which ones are criteria that are really important, and should storage be given the same weight as precautions and contraindications? In this report we have gone through each one systematically and not made a judgment as to which one should receive higher or lower weightings because the Keystone report did not make a distinction. So, we felt it was inappropriate for us at that stage, but I think it is very important for this discussion to take place tomorrow.
Unbiased in content and tone -- no question that the raters saw these as unbiased in content and tone. If there was any criticism at all in these cases, here, it was because the balance leaned more toward risk information than benefit information, and maybe there should be a little bit more of the benefits so that consumers can make a judgment here between benefits and risks.
I find it personally interesting, for example, that many of the information sheets might only have one or two lines about the benefits but many, many lines about the contraindications, and everything we know about patient adherence is that patients need to have an understanding of the benefits of something if they are to take it for life. So, I think this discussion should take place at some point. The Keystone report was not as clear on this as I might like, and I might suggest that we have some discussion about what kinds of benefit information do people really need. For example, in Wisconsin we were helping a local clinic redesign their information sheets and it happened to be the Dane County Mental Health Clinic for persons with severe mental illness, and we held focus groups with persons who have diagnosed schizophrenia. We were somewhat surprised to learn from them that they wanted -- I don't know why we were surprised but they wanted more information about the benefits and they were quite serious about this, and I think they had a very good point. They are putting up with and tolerating a lot of side effects and it would be nice to know, when they start therapy, what the odds are of them being able to improve, and in what areas they are going to improve, for them to be making this decision. I think they have a valid point.
Legible and comprehensible -- 85 percent rating across but you see some problem here, and I would remind you that this is one of the items where the raters were less in agreement, and I am going to show you some data a little later that may suggest that experts are not the best people to ask about legibility and comprehensibility. First off, they are all pretty young; they don't have macular degeneration and other issues that older people do, and the information is second nature to them, so I have a feeling they are not so good at judging whether or not something is comprehensible. So, I think we really need to discuss that point.
Scientifically accurate -- no question on the first two drugs. This does not suggest that it was scientifically inaccurate. As I recall, it was more a matter of a disclaimer or other information missing for this particular one, but you could look at the detailed reports on that. I think they came out very well on this criterion.
Up to date in publication -- that means was the information up to date scientifically and did it include information about who the publisher or the vendor was. So, in the business of social science that is an item that is kind of difficult because you are mixing apples and oranges here, in my view, but at least I am honest about it. You can see on the more detailed report which of these were not met. In any case, you see that about 50 percent were both up to date and had the publication information; 55 percent and 45 percent -- pretty similar across the way. We could not always tell who the vendor was, and I am not sure whether this was because some information -- well, we really can't be sure because we couldn't go back to these pharmacies to ask. But on many of the sheets, of course, we could tell who the vendor was but on others we could not always tell who the vendor was.
I think one of the interesting things about this project is that in the original contract we were not asked to analyze by vendor so we have not done that. You need some staff to do that. I think we could do that to facilitate the process here. It probably wouldn't take too much time, probably a graduate student could do it in a while, in the summer. But there is a lot of variability in how it is that the pharmacy prints out information from the same vendor.
So, even though you have 50 information sheets from one vendor, you could have 30 different styles of it being printed out. Does that make sense? That is, some people might print it out in small print; others print it out in large print. Some people print it out in a printer that has sharp contrast; others print it out in a dot matrix printer that you can barely read. Some print it out on a sheet of paper that is very easy to see in terms of glare; others might print it out on a small sheet of paper with watermarks underneath it and you can see more of who owns the pharmacy than what you can reading the information. So, even though you might think that we have only a few sheets here and they are duplicated, actually we got an enormous variety because the pharmacy still has great latitude in how it uses that information or how it prints it out, not to mention maybe how it selects. But, we weren't able to find out whether they selected out. So, I would be a little careful about even doing a vendor analysis unless you had what the vendor originally gave them, which would require -- it is kind of like evaluating term papers and whether or not there has been plagiarism, only just the opposite I guess. We have done that.
Summary of the main findings -- now, before we get to some of these specimens, 87 percent of the patients received some form of written information. I think that is the good news of this report, suggesting considerable progress.
Over 75 percent of the items received a high rating or high adherence for criterion number one, drug and its benefits; for criterion number five, adverse reactions; seven, unbiased in content and tone; eight, legibility, even though I have some concerns about that one; and, nine, on accuracy and disclaimer. That is, in these areas the raters viewed over 75 percent of the items very positively.
But it seems clear that improvement is needed in other areas such as criterion two, on specificity of directions, which is good for some drugs but not all drugs; criterion three, on contraindications and what to do. It is not because contraindications are totally left out; it is because maybe they are not dealt with as fully as they should be. Okay? It is always important to remember that. It is not that contraindications are omitted; it is that they are not as complete as the raters felt they should be. Criterion four, on precautions; six, storage and general information; and ten, on publication information. Information quality varied somewhat by drug, as I have pointed out several times. So, I think in general the results are very positive in some respects, as well as raising some concerns.
Now, what are some other findings that I think are important to highlight for this first attempt? I think that given the fact that this has never been done before in this way, we at least were able to develop a method to get excellent inter-rater reliability using standard forms. This is an accomplishment that will bode well for the next time around if further evaluations are done.
Another finding I think is that we had lower inter-rater reliability for criterion eight, which is on legibility and comprehensibility. I think this suggests further discussion and work, and I have some ideas about that even though I haven't put them in the slides.
I think there are study limitations that we certainly, and many of you certainly, are aware of, and let's try to highlight a few of those so that we keep those in mind as we meet for the next day or so.
At the top of the list I think is that we gave equal weight to these criteria and sub-criteria even though you may identify a variety of reasons why you would want to weight one more than another, which is a policy issue and not a scientific issue. From whose perspective? From FDA's perspective? From drug manufacturer's perspective? From health professional perspective or from consumer perspective? I mean, there are so many perspectives here and ways that you could weight these that I don't think it is a simple question.
We have states that volunteered for the study. My one concern is that maybe those states have a different level of practice than the other states. We don't have any basis for making that conclusion because we don't know really what the situation is in the other states. I am less concerned about that one though because I feel that much of what we saw was in chain pharmacies which span state boundaries. So, I am not really terribly concerned about differing quality on written information, although I should tell you that, in fact, what I have done is to analyze whether or not there were differences in the prevalence of different information by state to see if there were any differences. There were not.
But I can tell you that there are wide differences in oral counseling. The purpose of this workshop is to focus on written information, so I am not able to take the time to go through some of the findings that we got on those one-page forms with the inspectors, except to tell you that in the future I would suggest, and believe it is quite important, that we always remember that written goes with oral, and it is hard to disentangle those. I understand the congressional mandate but I do feel that it is important for us to always keep returning to the fact that written information should not be substituting for oral information, and we always need to keep that in mind.
I did go through the forms, the observer forms, to see how it was, for example, that the pharmacists disseminated the written information sheets. What we found was that only 35 percent of the written information sheets were given to the client or the patient, patient observer, with some kind of mention or with some kind of oral review, or with some kind of encouragement to read it. In other words, in the majority of the cases, according to the state inspectors, these written information sheets are being stuffed in the bag. They are not being discussed, reviewed, or mentioned in a positive way by the pharmacist. And, I would warn us all to remember that what evidence we do have on the effects of written information would suggest that their efficacy depends on oral review.
The studies, for example, on the effects of written information on patient compliance would suggest that you really need the review by the pharmacist or professional if those things are going to make a difference. Why? Because if you are not encouraged to read it, many people will not read it. But if you are encouraged to read it, people will read it. And, if the pharmacist or health professional points out sections of it that are important I think the patient is much more likely to read it. Now, in this case, this is not an issue because we are talking about state inspectors. But I do think that we always need to keep remembering that oral goes with written. So, I will stop there and not beat that one too much into the ground.
Thirdly, sampling procedures varied somewhat from state to state. I guess we always need to remember that we can have differences in rural and urban areas. I think we focus more on urban areas, although we had some rural states so we maybe got some rural here. But it would be desirable to have uniform sampling so that we can make sure that we have the kind of sampling that can enable you to draw conclusions more broadly.
We had limited training of the patient observers, and we did not have a very good way of controlling or reviewing their work. It is not that I am questioning it but I think in any study you need to bear in mind that if you have this many people in eight different states, it is possible that some people interpreted their role or their job a little bit differently than others, and I cannot really comment on that, except to say that we were not able to give them the kinds of intensive training that you would in a study where you had maximum control. I am not terribly concerned about that, but I put it out there because I think we always need to remember it and bear it in mind when interpreting results.
My most serious concern I saved for last, and that is that the contract focused on an expert panel which did not include a consumer panel or consumer input. And, I feel that, based on the results of the inter-rater reliability, the experts had some difficulty deciding whether things were legible or comprehensible. That is kind of a flag number one.
I do think we should consider including some mechanism for consumer input the next time around. I think this could be done in a number of different ways that I would encourage you to discuss or think about, including ways that you might suggest. One might be to add consumers to the panel. That is okay. It is not quite as interesting but it is possible that you could do that. A second way might be to have two types of panels, an expert or professional panel and a consumer panel, with them having distinctive roles or distinctive responsibilities, with the professionals being more responsible for evaluating the accuracy and the completeness of the information and the consumers being more responsible for evaluating the potential usefulness, the readability, the comprehensibility, the ease of following the thing, and so forth. So, you could think of ways that these two panels could complement each other. It is possible that you could take that criterion, number eight, and take it out of the expert form and make that a consumer evaluation and put the two together.
My colleagues and I, and Dara Bultman, actually spent a little time working on this in November and December, where we did develop a consumer rating form, and we developed a form for evaluating in more detail the design and format of the information sheets, and we came up with some very interesting results that convinced me that the consumer has a unique perspective here and that we should be eliciting it.
Now, some samples -- we selected a sub-sample of information sheets from the sheets that were collected in November, and I put together a consumer panel of 24 consumers, 12 individuals with a college education and 12 with a high school education. This is not described in your report anywhere so there is no need to look for it. It is a small pilot study, a small method study that we did with other support and it was not part of this contract, but because I know we are discussing all of this I thought you might be interested in seeing these samples and maybe hearing what some of the consumers said about them, even though this is not a large-scale study.
Here is one of the specimens. I have removed the identifying information up there. This is for ibuprofen. It came from pharmacy 644. You can ask yourself whether that meets the criteria that you think it should be meeting, but when we look at the consumer -- you can't read it? Well, neither can I.
You know, there is no way I could put this up so you could see it without changing the very nature of it. What I want you to pay attention to is that this is an exact photograph of it and I wanted to show a couple of things. I don't want you to get into your pharmacy role or your technical role of evaluating the content. I would like you to kind of see the style of these things in general. That is, how close are the lines; how are they organized, etc.
Now, we may try to figure out some way that we could get a sample of these available on the net -- I don't know, we haven't really talked about this but, trust me, we want to facilitate improvement here. So, aside from releasing these, which I think we would find difficult to do because they have so much identifying information on them, I want to just give you some idea here about the range of what we got and how consumers evaluated it.
In this particular one we asked the consumer to rate, for example, organization; attractiveness; print size; tone, whether it was alarming or encouraging; how helpful it was to them; and spacing between the lines. In this particular one the consumer gave it a 1 on spacing, poor. If you really could read this, if you saw the original, you would probably agree with that consumer that it was pretty hard to read this one. The patient said the print is too small; the lines are too close together.
The patient gave it a 4 on organization. You see the headings there. So, maybe that consumer thought it was pretty well organized.
How attractive is it? They gave it a 2, unattractive. Print size, they gave a 1. They gave a 3 on helpful -- 5 would have been helpful; 1 is unhelpful. And, 1 on poor spacing.
Interestingly enough, when you looked at the consumer evaluations on this, when the lines, print, spacing, etc., were rated poor, they often were critical of the whole thing, probably because it was just difficult for them to read.
Let's go to another one. Can you read that one? This is pharmacy 601. You see here that the name of the drug is kind of underneath here. It is ibuprofen. I have put yellow over here to cover the pharmacy name. This is the entire sheet. That is all the consumer got.
I thought this was kind of interesting. "This specific information may or may not apply --"
"-- to your condition. Please consult your doctor." I teach pharmacy students and this one would not pass!
Why am I taking this drug? For arthritic conditions, pain, inflammation, fever. That is drug benefits. Well how should I take it? Take with food, antacid as directed. Tell M.D. of other drugs you use/diseases you have, allergies or if pregnant. Limit alcohol intake.
Are there any side effects? Dizziness, drowsiness, report. Eye/ear problems; urine color change; black stools; difficulty breathing; mental changes; sun sensitivity; stomach pain.
How do I store this? Store at room temperature, away from moisture and sunlight. Do not store in the bathroom.
If I should miss a dose? Take missed dose as soon as remembered but not if it is almost time for the next dose. Do not double up.
See, here is an example where we couldn't tell who really created this information. There is no publication information, no vendor, etc.
This is what one of the members of our consumer panel said about this, "very easy to understand, very easy to remember, very easy to locate what information you wanted." This is, of course, the tradeoff of giving information. You know, if something is easy it often is a little bit limited in terms of amount but at least they gave them pretty good ratings on that. How likely are you to read this? Somewhat likely.
Then, when we get over to the other side of the form they said "about right" on the amount of information about the medication. Below is a list of topics: please indicate your opinion about how much information was provided on each topic and how useful you think this information would be if you were taking this medicine for the first time. Medication and benefits, they thought that is about right. Who should not use this medication? Too little information. Specific directions about how to take the medication? They said too little. Precautions that need to be taken while using the medication, they said too little. Possible side effects and what to do, about right. How to store it, about right. I think that is kind of interesting, you know, that they are making distinctions here.
When they were asked to rate the organization, they gave it a 2 out of 5, with 1 being the lowest. Attractiveness, they gave it a 1, which is the lowest rating of attractiveness. They gave a 1 on print size; a 3 on helpfulness; a 1 on spacing even though the spacing looks pretty good in comparison to others.
Now let's go to another sample. I could try to read through all that but I am not sure that would be very useful to you, but this one is a little bit more detailed. You can see here that they have some use of headings but they are all caps, which is not recommended because people find it hard to read when it is all capitalized. You see headings -- uses; how to take; side effects; precautions; drug interactions; what to do with missed dose; and storage.
When we asked the consumers what they thought of that one, they said, for example, when we go over to amount of information they thought there was about the right amount of information on directions, precautions and side effects. Then they had some differences of opinion about the medication and its benefits.
When they were asked how well organized it is, they gave it a 4. When they were asked how attractive it is, they gave it a 3. When they were asked about print size, they gave it a 2. When they were asked whether it was helpful or not, they gave it a middle of the range, a 3. When they were asked about spacing between lines, they gave it a 2. In other words, they thought it was not spaced very well, too close together.
Here is another one. Now, you see actually this is put out by a vendor -- maybe you can't read that; let's not worry about that. You see here common uses; how to use the medication; cautions; possible side effects. You see here again is this tendency to use all caps, and to use them quite liberally. You see here all these caps; all caps; all caps; all caps.
Let's see what the consumer said about that one. I hope I am not boring you with detail here. This one said it was pretty easy to read. It was pretty easy to understand. It was pretty easy to remember. Very likely that they would read it; very likely that they would use it. About the right amount of information on each one of the criteria, the medication and its benefits; who should not use it; specific directions; precautions; side effects, etc. They gave it 2's on everything there, about the middle.
When asked about organization, they gave it a 4. When asked about attractiveness, they gave it a 5. I think there is a tendency for them to give higher scores to something with some color, and maybe the bolding facilitated that score. Print size, a 5. Helpfulness, a 5; and spacing, they gave it only a 3. They probably were pretty liberal on that one because there is not much spacing there but these are consumer evaluations.
Let's try another one, 721. Now, actually, it looks pretty similar to the other one, doesn't it? You again see the same organization. Now let's see what the consumer said about this one.
About the right amount of information, a 2, a 2, a 2, a 2, a 2 on each one of the topics, about the right amount. They did not rate the organization, so missing data. They put a 2 as how attractive it is. This person didn't consider it very attractive. Print size, they gave only a 2. Spacing between the lines, they gave it a 1. In other words, they were kind of critical there of that one for spacing.
Oh, boy! Actually, this is an example of something I think that comes from a dot matrix printer. This is an entire sheet and then this patient got, in addition to this sheet, this little additional sheet with it. Now, you can't read this maybe from back there but this says Paxil 10 mg tablet. There is a 1 there with no explanation of what that is. May cause drowsiness. Then there is an "A" and "LCOHOL." There is a poor wrap here I think. This is the way the patient got it. "A LCOHOL intensifies effect; use care using machines." Then there is an 8 which says "do not drink alcoholic B EVERAGES --" the wrap is not very good -- "when taking, space, space, this medicine." Then there is a number 13. Take or use this exactly "A as" directed. Do not skip doses or discontinue. Call doctor before taking OTC drugs. Some may affect "A CTION" of this med.
Now let's see what the patient said. Very likely that they would read it; very likely that they would try to use it. They thought the sheet in general was well organized. They gave it a 3 on attractiveness; a 3 on print; and a 2 on spacing. They added these comments at the end: The print type was somewhat hard to read, however, I liked the organization layout. The small sheet attached is not helpful, and unorganized. So, that is what they said on that one.
Now, what we decided to do, and this is about where I am going to end -- unfortunately this doesn't come out very clearly either because it is an exact photo -- was to take the Keystone criteria on design and format out of the appendix. I think it is Appendix G, as I recall. We took that and, as carefully as we could, used those criteria to redesign three sample sheets, three actual sheets that we collected.
The idea here was to take the design and format guidelines and redesign the sheets so that they met the Keystone criteria, and to keep the content exactly as it was in the original. Is that clear? So, for example, the Keystone criteria may say something about type, and it may say something about the use of headings without all capital letters. It may say something about sharp contrast. It is easier to read something when it has sharp contrast. It says something about font size. It says something about using bullets, which make it easier for people to read. And, it says a number of other things.
Basically, what we did then was to create this sheet. I have numbered it here. This sheet is kind of similar to the prototypes that are in the appendix of the Keystone report. We took these sheets and we blended them in with the actual sheets and, actually, we randomly ordered them so that the patients on our patient panel would sometimes get this one first and sometimes they would get it last; sometimes they would get it second; sometimes they would get it third. Our intent here was to ask them to rate real sheets and a design sheet that we tucked in there, what we might call a model sheet. This was one of the model sheets, and we did not tell them that we put a model there. We said these were all sheets that we had collected. That is a little bit of a mislead.
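The blinding step described here, shuffling the redesigned model sheet in among the real collected sheets so that each panelist sees them in a different order, can be sketched as follows. This is an illustrative reconstruction only; the sheet identifiers and the seeding scheme are assumptions, not details from the study.

```python
import random

# Hypothetical sheet identifiers: several collected (real) sheets
# plus the redesigned "model" sheet built from the Keystone criteria
SHEETS = ["real_601", "real_644", "real_721", "real_775", "model"]

def presentation_order(seed):
    """Return one panelist's randomized viewing order, so the model
    sheet may land first, last, or anywhere in between."""
    rng = random.Random(seed)
    order = list(SHEETS)  # copy; leave the master list untouched
    rng.shuffle(order)
    return order

# 24 panelists, each with an independently shuffled order
orders = [presentation_order(seed=i) for i in range(24)]
```

Randomizing the position of the model sheet for each rater guards against order effects, such as panelists rating whatever they see first more generously.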
We had 24 consumers, as you remember, 12 that have a college education and 12 that don't. So, we had 24 total. We asked them to compare on organization this sheet with the sheets that I have been showing you, the real sheets. The real sheets got a mean of 3.5 and the model got a 4.8. We asked them to rate on attractiveness. The real sheets got a 2.8, the model got a 4.3. Print size, the real sheets got a 2.6, the model got a 4.9. Spacing, the real sheets got a 2.5, the model got a 4.7. Whether or not they were helpful, the real sheets got a 3.7, the model sheet got a 4.7. All of these were statistically significant, as you might guess but it made it interesting nevertheless.
When we asked them how easy it was to read, they gave the real sheets a 3.6 and the model a 4.9. When we asked them how easy it was to understand, they gave the real sheets a 4 and the model a 4.9. How easy to remember -- this is kind of interesting -- they gave the real sheets a 3.5 and the model a 4.5.
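The transcript reports only group means and notes that the real-versus-model differences were statistically significant, without naming the test. A paired t statistic is one plausible analysis for this within-panelist design; the sketch below uses made-up per-panelist scores, since the actual rating data are not available here.

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(real_scores, model_scores):
    """Paired t statistic: mean within-panelist difference divided
    by its standard error."""
    diffs = [m - r for r, m in zip(real_scores, model_scores)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical organization ratings (1-5 scale) for 24 panelists;
# the transcript gives only the group means (real 3.5 vs. model 4.8)
real = [4, 3, 4, 3, 4, 4, 3, 3, 4, 4, 3, 3,
        4, 4, 3, 4, 3, 4, 4, 3, 4, 3, 4, 4]
model = [5, 5, 4, 5, 5, 5, 4, 5, 5, 5, 5, 4,
         5, 5, 5, 5, 5, 4, 5, 5, 5, 5, 5, 5]

t = paired_t(real, model)
# Compare t against the t distribution with n - 1 = 23 degrees of
# freedom; a large value indicates the model sheet's advantage is
# unlikely to be due to chance
```

A paired design is appropriate because every panelist rated both the real and the model sheets, so each person serves as their own control.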
We did some other things here but I think that is enough to perhaps stimulate your thinking. These results were also statistically significant. Now, what this tells me I think is that if we did put together a consumer panel, they might have a very different view and they certainly would have a somewhat critical view of print size and how this information is laid out on the sheet, which may or may not have anything to do with the content of the information as it is put together by the vendors.
So, we have a situation here where we need to be concerned, of course, about content but we also need to be concerned about how these sheets are being printed out and how they are being presented to consumers and what consumers think of this because it could get away from us, so to speak. In the interest of having something scientifically accurate, we have something that really is kind of a mess potentially, not because anyone is intending it to be that way but because we don't have clear standards, or we don't have a clear way of evaluating it. But I think there are some ways to evaluate it and it is something that we can discuss in the next day or so, something that we weren't able to do as carefully but I think we have a few ideas.
I think I will end my presentation there which, fortunately, is sooner than five o'clock, and I very much appreciate your attention. I know that many of you have read the report and I appreciate your indulgence with me going through the details. I now will turn it over to Nancy who will, I guess, monitor the questions or however you plan to do it.
DR. OSTROVE: We are finishing a little earlier than we expected, which I think will probably be fine for everyone. No one really objects to finishing earlier. This also gives us a little bit of time for questions, more time for questions before I give you all my little notes about tomorrow.
There are two mikes, one in the front and one toward the back, and we would appreciate, if you have questions come up to one of the mikes, and if you could give your name and affiliation so we have a sense for who is involved, and then start. Okay?
MS. LEUNG-VEGA: Jane Leung-Vega, from Merck-Medco. I actually have two questions. One of the questions is what was determined as up to date information, that is, once the FDA or whatever criteria you used to set the information is scientific information, how soon would they need to have that published or available to the patient?
DR. SVARSTAD: As I recall that, we gave them the most recent approved labeling for the medication, and the panelists had agreed that they would use that as the way to determine whether or not the information was up to date. So that was the most recent at the time that we developed and approved the forms, which would have been in spring of '99. Does that answer your question?
MS. LEUNG-VEGA: Well, what I wanted to know is, like, if the FDA approved a new indication, how soon should that information be available to the patient? Three months? Six months?
DR. SVARSTAD: I think that is a good question that we, in a sense, did not have to face, or maybe we chose not to face it because in the interest of doing the evaluation and having a standard form I think we were making the assumption that the approved labeling for the date at which that form was developed would be the one that we would use. I think our intent, original intent had been that these data would be collected even more quickly than we were able to collect, but I think your question is a good one for future. And, as I recall, there is no guidance on that.
MS. LEUNG-VEGA: Okay. My second question is, of those vendors that you could identify in the written information, was there any that actually met a lot of the criteria?
DR. SVARSTAD: That is another good question. The contract did not include staffing for analyzing vendor differences. This is not to say that we can't analyze that but we have not analyzed vendor differences.
MS. LEUNG-VEGA: Okay, thank you.
DR. SVARSTAD: You know, I am sure it would be useful to the vendors.
Several of us were talking during the break and if anyone has ideas about, you know, what we could do to do additional analysis to assist the vendors in this, I would be more than willing, but also remember that we need some staff to do that. So, if you have some ideas -- resources are always an issue but I think it is an important question. I did not realize really that there could be as much selection here, so that probably makes me even more cautious about reporting vendor differences because you are not sure -- are you reporting vendor differences or are you reporting what the pharmacist chooses to publish. The two are very different questions.
MR. SASICH: My name is Larry Sasich. I am from Public Citizens Health Research Group, in Washington, DC. I am somewhat maybe confused or concerned about the expert panel's definition of scientific accuracy. It seems like scientific accuracy may have been applied primarily to FDA-approved indications that you found in the patient information leaflets while, in fact, the intent was that scientific accuracy covers all of the labeling and that essentially means it is derived from, and consistent with, the FDA-approved labeling.
In your patient information evaluation form for paroxetine, for example, your criteria five -- in the criteria that you used, one of the criteria --
DR. SVARSTAD: Criteria that the panel approved.
MR. SASICH: Yes, other side effects may occur. Check with your provider. If you go back and look at the Keystone criteria, particularly component (e) which deals with paroxetine, the PIEF you used failed to identify 14 precautions for the drug. Under component (f) for adverse drug reactions, the PIEF missed 25 adverse drug reactions. And, I know that we didn't have a clear definition of frequently occurring adverse drug reactions when the Keystone report was written but there is a regulatory definition of frequent adverse drug reactions, and those are the ones that occur with at least a frequency of one percent. And, do you really think that it is adequate communication of risk information to patients to say other side effects may occur and check with your doctor? That is the issue that we are trying to solve, and that we have been trying to solve for so long.
DR. SVARSTAD: Do you want to try and answer it?
DR. OSTROVE: I think one of the things that we tried to do with the interim report was to get an expert panel and have the experts do this, because these are experts in pharmacy and in communication, and because there are issues, and I think there are concerns always, with regard to how much information is useful; how much information ends up being too much information so that, in fact, what the patient ends up getting is so much that they are totally overwhelmed by all of it and it dilutes the important information.
So the attempt in this particular instance was to have the experts make these determinations as opposed to, for instance, to having FDA making the determinations. That is one of the reasons we are here today, actually, to hear from the public as to, you know, where they feel the limitations are. So, I think it is very important that you give us these perspectives. Larry, we really do appreciate it but I am not sure that it is something that Dr. Svarstad should be answering because it is really in some sense a policy question.
MR. SASICH: Nancy, just one comment, one of the consumer groups' major points in the Keystone meeting was having to deal with the paternalism of health professionals that wanted to decide for us what information they think we should know. The Keystone criteria defined useful information and one of those criteria was informing patients about frequently occurring adverse effects.
DR. OSTROVE: That is true, but there is not actually complete agreement about what frequently occurring adverse events means.
MR. SASICH: I know, and I said that.
DR. OSTROVE: For instance, one out of a hundred, for certain products that is appropriate. For certain products it might be five out of a hundred. For certain products it may be even more. So, that is a difficult concern. It is one that is important to raise --
MR. SASICH: Sure.
DR. OSTROVE: -- and it is difficult to know where to draw the line on that.
MR. SASICH: Well, we do have a regulatory line.
DR. OSTROVE: In terms of what frequently occurring means for the package insert --
MR. SASICH: Right.
-- but that is not necessarily what frequently occurring means for a patient package insert.
MR. SASICH: But to be useful it has to be consistent with or derived from.
DR. OSTROVE: But to be useful does not necessarily mean that it has to have every single piece of information --
MR. SASICH: No, only the ones that are frequently occurring --
-- so please let me know how you would make the decision about what risk information should be withheld from consumers because a group of pharmacists think they don't need to have it. That is not the point of this whole exercise.
DR. OSTROVE: I think you are right, this is an important concern but I am not sure it is one that we can determine right now, and I am hoping that you will discuss it in your group tomorrow.
MR. SASICH: Probably.
DR. OSTROVE: All right!
MS. POWELL: Marjorie Powell, from PhRMA. I have two questions that were unclear to me from your presentation. The first is that in the summary results that you put up there you included as one of the sub-criteria for criterion nine a disclaimer. But when I look at your tables, I find the disclaimer included within criterion six. It wasn't clear to me where the disclaimer got counted.
DR. SVARSTAD: There might be a couple of disclaimers. I would have to look at the forms.
MS. POWELL: Okay. Maybe you and I can talk about that later.
DR. SVARSTAD: Yes.
MS. POWELL: The other thing that was --
DR. SVARSTAD: As I recall the guidelines, there were a number of disclaimers. I mean, if you get back to the side effects question, I would certainly --
MS. POWELL: Because, as I remember, the disclaimer explicitly was this document does not contain all information; talk to your physician.
DR. SVARSTAD: It could have been a typographical error. Which slide are you referring to? Do you recall?
MS. POWELL: It was the one that described --
DR. SVARSTAD: Summary of findings?
MS. POWELL: -- summary of findings, criterion nine.
DR. SVARSTAD: Let me just find it here. You have to go to the original form to see this. If you look, for example, at table 4, which is for ibuprofen --
MS. POWELL: It may not be worth taking everybody's time to work through it.
DR. SVARSTAD: No, that is okay; that is why we are here. No, item nine is written information that is scientifically accurate. One of the sub-criteria under ibuprofen, for example, is that the indications for use are consistent with FDA labeling, and that is listed out. Then it notes that the medication may be used for other purposes.
MS. POWELL: Okay, so you are using --
DR. SVARSTAD: And, that the general guide does not include information about non-approved uses.
MS. POWELL: That answers my second question. You are using the term disclaimer to be a general reference to a statement that there may be unapproved uses --
DR. SVARSTAD: Yes, for the purposes of reporting here. The word disclaimer does not appear on the rater's evaluation form. If you look at item nine in either table 4 or table 5, you see, for example, under amoxicillin, item number nine is that indication for use is consistent with FDA labeling. The second sub-criterion is encourages communication with the provider. Then, the third point is that the general guide does not include information about non-approved uses. So, maybe I am not using it as you would.
MS. POWELL: I think that is what the issue is.
DR. SVARSTAD: Okay.
MS. POWELL: But then that leads to my second question, if the patient information included information about an unapproved or an off-label use --
DR. SVARSTAD: Which it does not -- oh, if it did, yes?
MS. POWELL: If it did, how did you deal with that?
MR. SASICH: For the patient information evaluation form for ibuprofen --
DR. SVARSTAD: Excuse me, let me clarify something here. I realize the controversy about the approved and non-approved uses. I do not want to get in the middle of it.
MR. SASICH: Okay, I am just saying that you evaluated an off-label use, and that was Marjorie's question.
DR. SVARSTAD: Excuse me, may I answer, please?
MR. SASICH: Yes.
DR. SVARSTAD: The panel discussed this. We decided to go with the Keystone report --
MR. SASICH: Yes?
DR. SVARSTAD: The general guide does not include information about non-approved uses. If it listed a non-approved use I, frankly, cannot remember how we handled that. Duane, do you remember how we handled it? I do not remember seeing an instance of that. This is Dr. Duane Kirking. Do you want to comment on this?
DR. KIRKING: Yes, I was trying to think back to exactly what we did as I was coming up. Quite honestly, I can only speak to the drug I did; I did not do them all. We didn't see any. I don't think we saw anything --
DR. SVARSTAD: I don't think we saw anything.
DR. KIRKING: -- in any of the labeling. I didn't see any in the one I did and I don't think any of the others did. That doesn't mean it isn't an issue, but I don't think we saw any.
MR. SASICH: In ibuprofen?
DR. KIRKING: I know we had some controversy about it and I was one of those, in fact, that wanted to see more discussion of non-approved uses. This was when we set the whole process up before we actually looked at the materials. I think the decision was made -- I think we were told, as I recall, that we weren't going to be considering non-approved uses at all in our evaluations.
MS. POWELL: Thank you. I didn't want to get into the debate. I simply wanted to understand what it was that happened.
DR. SVARSTAD: I appreciate that and, as I recall, one of the difficulties for us in implementing this, because the way I saw our role it was not to take a position on this controversy; it was to try to implement a consensus panel. So, I think when there are issues of controversy or lack of clarity, these issues need to be determined ahead of time so that we could have some guidance if it does occur. Unfortunately, I don't remember it ever occurring. So, I guess we were fortunate that it didn't come up this time, which doesn't answer your concern.
MS. POWELL: Yes, that may have been a function of the product selection.
DR. SVARSTAD: Right, exactly.
MS. POWELL: Thank you.
DR. SVARSTAD: You are welcome.
MS. DAY: I am Ruth Day, from Duke University, and I have two much easier questions. The first one is you did alert all the pharmacy managers that somebody would be coming in a particular period of time --
DR. SVARSTAD: In Wisconsin.
MS. DAY: -- in Wisconsin. In the other states, did they not know anything?
DR. SVARSTAD: No.
MS. DAY: That is great; that is great. My second question has to do with the leaflets or the written information examples that you did get. You showed us a nice array of examples, but I am trying to get an idea out of the hundreds of pieces of paper that came in, how many of them appear to have come from comparable sources. I know that sometimes you can identify particular vendors and sometimes not, but if a content analysis were done you would find there are a certain number of types.
DR. SVARSTAD: Of course.
MS. DAY: So, I am interested in the type-token ratio about how many different types -- were there ten types, and then some had hundreds and hundreds and some only had one or two? Can you comment on that, please?
DR. SVARSTAD: Unfortunately, I cannot, Ruth, because we did not analyze that. If you look at the tables, we had over 700 pieces of information come in, and there is no doubt in my mind that we could do that in a month or less, but you do need to have some criteria for doing that and you have to have some people simply to sit down and look at them. When these came in we were getting in -- you know, in a couple of months we had three states coming in. So, we were processing them and sending them off. So, this was never really part of the study. But I fully appreciate that it can be done, and would like to do it.
MS. DAY: And then look at the rating results as a function --
DR. SVARSTAD: Yes, but the only concern I have is that you are not always sure if the vendor was responsible for what you got.
MS. DAY: Well, aside from trying to decide who the vendors were --
DR. SVARSTAD: Yes. Let's be honest here. I think, you know, the number of vendors here I can put on both of my hands. You know, I think there are very few vendors that were responsible. You know, there was a number where it wasn't clear where it came from. But I think we all know who the major vendors were, but I could not comment for you -- I would not want to give you any impressions that vendor A is getting this kind of rating versus vendor B and C.
MS. DAY: No, I wasn't interested in that explicitly --
DR. SVARSTAD: Yes.
MS. DAY: -- but what percentage of all of them are piling up on a few, and if you could just look at those, and all the others are adding noise to your data -- just a preliminary look I thought.
DR. SVARSTAD: No. I hope you appreciate what is involved in just having them rated on the content, but it is something we can do. We certainly can.
The question, I suppose, for this week really that I would turn back is, is this an important question and, if so, what kind of information would be useful on the next evaluation because that is, of course, where we are headed. I would assume that is where. So, if vendor differences are important, how would you do that and how would you justify that, or what would you get out of it? I would think some discussion of that would be useful.
MS. DAY: Well, just on the surface, you could have those who have all of the side effects categorized by severity and frequency of occurrence, or just a sprinkling of some, and then just categorizing on that one variable that is interesting and look at the ratings.
DR. SVARSTAD: There is no question that you could simply use the scales that we did have to do this, but I think what I am asking is what do you want to know about the vendors, and how would we do this in a way that is fair to the vendors, and how would you release that information? Coming from a university background, I am very sensitive to releasing information that reveals the identity of anyone, including a patient, a pharmacist, a pharmacy, a chain or a vendor. I am, frankly, a little stumped on how we would do this but if you have some ideas I think that would be --
MS. DAY: I will just conclude by saying setting that vendor issue aside and all the political aspects of that, just looking at features, taking a feature approach and then looking --
DR. SVARSTAD: Sure.
MS. DAY: Thank you.
DR. SVARSTAD: Yes, you are welcome.
DR. OSTROVE: I would just like to give you a little bit of background, for those especially who are asking about the kind of analyses that were done. Basically, it really does come down to a matter of logistics in terms of this particular study. We wanted to make sure that we could get the study done and get the results out to the public for your comment in enough time so that the vendors, whoever they may be, would have time to make whatever changes they saw fit on the basis of the study, even if the specifics of the study did not address their specific pieces and, as Bonnie points out, in many cases there are options that the pharmacist has to print out pieces of it and not the whole thing.
We wanted to make sure basically that this information was available in a timely fashion, and we were limited in terms of the funding. We were graciously provided funding by the ASPE, by the Assistant Secretary for Planning and Evaluation, and it was only a certain amount and we wanted to make sure that we had this done. So, it is not that Dr. Svarstad didn't want to do it. Believe me, she definitely wants to do these analyses but we could only fund certain kinds of analyses and we had to make sure that the study was done in a timely fashion. So, just keep that in the back of your mind.
As a result, I think some of the questions kind of derive from that, some of the questions that we are asking you. Given now that we have all this out, let's hear all of these concerns. Certainly, I think Bonnie would love to collaborate with people with regard to some of these additional analyses.
MS. ALLINA: I am Amy Allina, from the National Women's Health Network. I have another question about scientific accuracy. Looking internally at the panel's own evaluations because it felt to me, reading it, like there was some internal inconsistency --
DR. SVARSTAD: I don't think on accuracy; there wasn't really.
MS. ALLINA: Well, let me explain why I say that.
DR. SVARSTAD: Sure.
MS. ALLINA: To have the panel say that, for example with ibuprofen, there was 83 percent scientific accuracy and, at the same time, having them say 59 percent had low adherence on contraindications, or 40 percent low adherence on precautions, to me --
DR. SVARSTAD: It seems inconsistent --
MS. ALLINA: Exactly, right.
DR. SVARSTAD: Yes, I think in answer to your question, there really are two questions here. One is to what extent is a contraindication presented or included at all, regardless. Then, I think the other side of it is whether or not that information is up to date. So, I suppose you could say that the information sheet includes a contraindication but it is something that is not consistent with contemporary knowledge, whether you are talking about clinical knowledge or FDA knowledge.
When we discussed this, my personal view on this is that this is kind of redundant. I mean, it seems to me that if you put down what are the approved indications, you put down what are the approved or accepted side effects, however you define that or however far you want to go with that, and then you also say scientifically accurate, I personally -- and that is just my personal opinion -- it seems a little redundant.
MS. ALLINA: It seems to me it could be redundant but you have different numbers.
DR. SVARSTAD: Well, but you see, there are two different issues here. There really are two different issues because as the form represents it, the question is whether or not the information was presented at all. Then, of the information that is presented, to what extent is it accurate?
MS. ALLINA: Well, from a consumer perspective, the information isn't accurate if it is not complete.
DR. SVARSTAD: Well, but you see, that is why we are trying to develop a form where you make clear what you mean by completeness --
MS. ALLINA: Sure, I understand that.
DR. SVARSTAD: -- specificity and accuracy. I appreciate where you are coming from, from a consumer's perspective, because, not being a pharmacist, I have probably been there. But I think there are a number of dimensions here -- specificity, completeness, accuracy, legibility. All four of those dimensions could be used to evaluate the same sentence, and I think that is what we were trying to do.
MS. ALLINA: It is very interesting data. Thank you.
DR. SVARSTAD: Yes, thank you.
MS. COHEN: A friend just told me I shouldn't give my real name but I will.
MS. COHEN: You know it anyway.
DR. OSTROVE: We know you!
MS. COHEN: That is my problem! My name is Susan Cohen and I am a consumer advocate, and I have served on an advisory panel for the FDA. I have to tell you I have a lot of emotional reactions to what just happened. It is a panel of experts but consumers are experts in their own way too.
DR. SVARSTAD: I agree with you.
MS. COHEN: And, I feel very hurt that consumers were not really included. If I may make some suggestions, I think the food label has been one of the most successful things for consumers. Let's have a label on medication. Just tell consumers exactly right away what they can learn and what they cannot learn.
I also think that every consumer should get a copy of their prescription so they can compare it against the medication they receive. I think consumers should get a list of generic questions that they should know how to ask the pharmacist, if they can ever see a pharmacist. Now, I can tell you that the pharmacy I go to is so busy that all I see is a clerk, and many consumers do that. Many pharmacies are under-staffed, and there really should be a ratio of the number of prescriptions filled to the number of pharmacists.
So, I think there are some real problems for consumers. How are consumers reassured that they get the right medication? There is a lot of trust involved, but there should be something more that could tell them. Are there instructions to discard old medication?
DR. SVARSTAD: I am sorry?
MS. COHEN: Discard -- that is my Boston accent; I don't know how else to tell you -- to throw away --
-- to throw away medication. Did you go into a very busy pharmacy to see how busy it is and what kind of attention people get? I was concerned that it wasn't multi-cultural, from what I can gather, in terms of your people going out. I mean, one of the things that the FDA encourages is that when they do any clinical trials they be multi-cultural.
I just think that not to have really included consumers because they are the ones that really have to face the problem -- because I am capable of doing something, it doesn't mean someone else is. So, with due respect to the Keystone and the study, I am profoundly troubled, I have to tell you, and I hope that you will be able to deal with consumers and talk to all of these wonderful consumer organizations, and I see some of the people here, and let people speak for themselves.
I have to tell you my husband is a scientist so I am not in any way downgrading the people on the panel, and I am truly troubled, as you can tell, and I did give my true name.
DR. SVARSTAD: Well, let me try to just respond to a couple of things, and I appreciate your comments. First off, I think I agree with you, as I stated, that in my personal view the lack of consumer input is the major study limitation. It was not our choice. So, I agree with you and I would like some support for that. I have spent my career being a consumer advocate, and that is why I am interested in this so I am in total agreement with you. I am not sure that I want the consumer evaluating the accuracy of information. When I said before my personal opinion, we all have opinions about this, but we all have roles in this but I would not want to be the one -- I have glaucoma. So, I am a consumer of anti-glaucoma agents. Frankly, I do not want to evaluate the accuracy of my glaucoma information; I don't know about you.
But, like you, I do want an input, and I don't want a paternalistic or maternalistic approach to it. So, I think we have to be very sensitive to that. But I would like to see some kind of multiple role, multiple perspectives in this. So, I am in agreement with you on that. I think we just have to think of ways to do that the next time around.
Did we go to busy pharmacies? Yes. Did we keep track of whether they were busy? Yes. I am not presenting that data yet because that, again, wasn't part of the contract but we are going to check to see whether or not there were some effects of busyness. That is, of course, somewhat hard to do because we were not allowed to collect any information about all these pharmacies.
The third thing I think that you said is that you go to your pharmacy and you don't get information. I have two suggestions for you, if I may offer them. One is, you may have told your pharmacist of your unhappiness. I think they need to hear that. Then if they don't, go to a different pharmacy. It is not easy but I do think consumer expectations for a pharmacy have to include good information, both oral and written, and we have to communicate that to them. So, I am with you on that point. It is not easy though.
Whether or not there should be standards about staffing, I think that is an issue for another workshop, but I am with you on that one too. Go ahead, Pat.
MS. BUSH: Pat Bush, from United States Pharmacopeia, and I have been trying to think of where we should go from here. You have done a fantastic job in contributing to this evaluation of leaflets in terms of these ten criteria. I know how hard that kind of a study must have been.
So, where can we go next? How can we get these kinds of things together? If we say that useful is the ultimate good -- what are these leaflets for? They are for usefulness to patients. Right? If we can define that, then it seems to me what we need to do is take each of these criteria and format and see how much does each one of these contribute to usefulness. How can each one be made optimal? How can format be made optimal to contribute to usefulness? That should be the driving force. That is all. That is not a question.
DR. SVARSTAD: Well, no, I think it calls for lots of input and lots of discussion here as to what, in fact, is useful, and I know that is what the Keystone panel was trying to do, but this is not simple. I agree with you and I think we just need to discuss it and keep discussing it.
My understanding of this or my view on this is that I think this is going to evolve. You know, our definitions of usefulness, our definitions of accuracy, our definitions of completeness are likely to evolve, and that is the way it should be. I don't see how we can do this all in one event. It is too complicated. Yes, ma'am?
MS. HARE: My name is Doris Hare, and I am with the American Foundation for Maternal and Child Health. My concern is that all of these inserts should have a very clear warning regarding use during pregnancy, and I would like to see us say, use in pregnancy: none of the drugs approved by the FDA has been subjected to a follow-up to determine whether or not a drug may result in delayed long-term adverse effects on the child exposed in utero to the drug. That is an absolute, undeniable fact, and women have a right to know that.
I mean, this past season we have had tremendous experimentation, I am sure, with the flu season and everyone taking different kinds of drugs and, yet, there is not even an attempt to evaluate the effect of these drugs on the baby. Thank you.
DR. SVARSTAD: Thank you.
MR. NESBITT: I have a methodology question. Scott Nesbitt, with Health Resource. Did you weight any of the results based on the size of the pharmacy at all?
DR. SVARSTAD: No.
MR. NESBITT: That might be something that we want to consider given that with pharmacy A and pharmacy B one does 100 scripts a day and one does 600 scripts a day. As your pilot study showed, if you go in a second time you get the information -- you didn't need to do more than one visit. So, in reality, if you get information from the larger pharmacy a lot more people are getting that information. So, the 87 percent of people getting information is really 87 percent of pharmacies are distributing information and that might understate the actual number of people that are getting information.
DR. SVARSTAD: I think there are a lot of questions that you could ask about the characteristics of the pharmacy in addition to size. One is availability and capacity of their technology, their computers, their printers, their staffing, etc. So, that is a whole study in itself, and a very important one but one that we couldn't do without that information. We had no way of determining what the volume was.
MR. ROSS: John Ross, Eli Lilly and Co. I was just curious how many were more than one page long? What percentage?
DR. SVARSTAD: We did not evaluate number of pages.
MR. ROSS: Okay.
DR. SVARSTAD: I would say the overwhelming percentage of all information sheets -- I only remember one that exceeded one page.
MR. ROSS: Thank you.
DR. SVARSTAD: Maybe there were more than that; maybe ten, twelve. Most of them were one sheet.
MS. SMITH: Dorothy Smith, with Consumer Health Information. I just want to congratulate you on this first step, this study. I think it really gives us a lot of good information.
I am listening to the comments from the consumers and health professionals, and what I am hearing is that they want useful information. I agree with you totally that written information should never replace the verbal or the oral information, and I am wondering if one of the next steps -- maybe you looked at this, the behavior modification techniques that might have been used in some of the written instructions. You mentioned briefly color. I am wondering did any of these instructions address specific patient compliance problems unique to that drug or disease, or patient population? And, did any of them use medical illustrations to describe the condition, to help the person look at a picture?
DR. SVARSTAD: We did not count the number that had any illustration but I, frankly, don't recall any.
MS. SMITH: Okay, I think that may be an area in the future.
DR. SVARSTAD: Yes, that is one of your questions. I think if we had been looking at a glaucoma agent or an asthma inhaler we would see a very different result on that, Dorothy.
Then, your other question related to adherence --
MS. SMITH: Behavior modification.
DR. SVARSTAD: Maybe you should clarify how you are using behavioral modification.
MS. SMITH: I am thinking that when you write the instructions, or before you even start, the people with depression are going to have different needs than the people receiving ibuprofen, or whatever, and when you start writing the patient leaflets you can be more effective than just describing a list of instructions about side effects and how to take the drug. If you can give them some information about the disease and just guide them and convince them to take the medication.
DR. SVARSTAD: Convincing them to take the medication, or about the disease -- I think I earlier made the comment, and I would reinforce again that I think in general we didn't see as much information about benefits and I would fit that into the category. I think the Keystone report seemed rather silent. I know it said that there should be discussion about the benefits and how the patient can maximize the benefits but I think it was rather silent about the amount of information and didn't say anything that it is important to discuss the odds of having benefit. I have been kind of intrigued with what would happen if we started doing that because, as a consumer, I think it would be useful in making a decision whether or not to initiate therapy if you know something about the benefits and the chances are that you will benefit from it. But that wasn't included. So, we did not add that. We tried to adhere to the Keystone criteria.
On the compliance, I am not sure -- I am thinking, for example with paroxetine, my understanding of the literature is that one of the main reasons that persons with depression discontinue paroxetine or any other antidepressant, for that matter, prematurely is that they do not fully understand that benefits are not likely to occur until after some weeks. That was included in the evaluation form.
MS. SMITH: Or they develop side effects.
DR. SVARSTAD: Yes.
MS. SMITH: Like with the antihypertensives, 50 percent of the population stop taking them in the first year.
DR. SVARSTAD: Yes.
MS. SMITH: So, it depends on the disease and the drug and the population.
DR. SVARSTAD: Yes.
MS. SMITH: But I just really think that your study has really helped a lot in the fact that you have pulled out the comprehension as one of the areas that we just need to look at a little bit more.
DR. SVARSTAD: Yes, thank you.
DR. OSTROVE: Any other questions? If not, I think we ought to give a hand to Dr. Svarstad --
Well, certainly the interest level is still high. We really appreciate all these comments, and I am trusting that at least 95 percent of you are going to be here tomorrow for the small group breakout sessions where we can talk in more detail about essentially how we want to deal with these results in terms of what we are going to do the next time around, that is, for the evaluation in the year 2000.
I just wanted to let you know that tomorrow morning there will be a Continental breakfast at 8:30. We will be starting promptly -- approximately promptly at nine o'clock. You know how it goes around here; you never start on the dot. Dr. Janet Woodcock, the Director of the Center for Drug Evaluation and Research, will be addressing us at that time. We will be moving to the breakout rooms at about 9:30, and in those rooms -- locked, no refreshments -- I am joking -- from 9:45 till noon -- there will be a break. I think the way we will probably set it up is that when you need a bio-break, just go and take it, rather than having set breaks because you know what happens with set breaks -- you set it for ten minutes and it goes to twenty minutes and we do have a lot of information that we would like you to deal with in a short period of time.
We will have a buffet lunch at noon, and we will be reconvening to discuss the results at around 1:15, and then finishing by three o'clock. If today is any example, we will finish a little earlier than that but I am not promising anything.
Now, important reminders: Please bring your badges because the secret to your small group breakout sessions is on the badge. Also, I would suggest that you bring your packets. The facilitator should have a new list of the ten criteria but we feel it would be helpful for you to have access to all the information in the packet in case you do want to kind of consult with it. A number of us will be moving around from group to group and we will kind of serve as information people, if you need that.
Again, thank you very much for coming. We really appreciate your all being here, and we hope to see you all tomorrow. Take care.
[Whereupon, at 4:55 p.m. the proceedings were recessed until 9:00 a.m., Wednesday, March 1, 2000.]