Skip Navigation
  • Text Size: A A A
  • Print
  • Email
  • Facebook
  • Tweet
  • Share

Blood Safety Transcripts

DEPARTMENT OF HEALTH AND HUMAN SERVICES
ADVISORY COMMITTEE ON BLOOD SAFETY AND AVAILABILITY

Tenth Meeting

Errors and Accidents in Blood Administration:

What Can Be Done to Reduce Their Occurrence?

Wednesday, January 26, 2000

1:03 p.m.

Hyatt Regency Capitol Hill

400 New Jersey Avenue, N.W.

Washington, D.C. 20001

PARTICIPANTS

 

Arthur Caplan, Ph.D., Chairman

James P. AuBuchon, M.D., Acting Chairman

Stephen D. Nightingale, M.D., Executive Secretary

Larry Allen

Michael P. Busch, M.D., Ph.D.

Richard J. Davey, M.S.

Ronald Gilcher, M.D.

Edward D. Gomperts, M.D.

Fernando Guerra, M.D.

William Hoots, M.D.

Dana Kuhn, Ph.D.

Karen Shoos Lipton, J.D.

Gargi Pahuja

John Penner, M.D.

Jane A. Piliavin, Ph.D.

Marian Gray Secundy, Ph.D.

John Walsh

Jerry Winkelstein, M.D.

Ex Officio Members

Mary E. Chamberland, M.D.

Jay Epstein, M.D.

Lt. Col. Fabrizio Saraceni

Paul R. McCurdy, M.D.

CONTENTS

AGENDA ITEMPAGE

Welcome, Roll Call, Introduction of New Members 5

Conflict of Interest Statement 7

An Overview of Approaches to the Management of

Human Error

Comments by Assistant Secretary for Health and

Surgeon General David Satcher, M.D., Ph.D.,

Department of Health and Human Services 10

Comments on "To Err Is Human: Building a

Safer Health System" (Institute of Medicine,

Washington, D.C., (November 30, 1999) -

Kenneth Shine, M.D., President, Institute of

Medicine, National Academy of Sciences 19

Human Error and Aviation Safety, Robert

Francis III, Immediate Past Vice Chairman,

National Transportation Safety Board 65

Forgive and Remember: A Historical Perspective

on Human Error in Medicine - Charles Bosk,

Ph.D., Professor of Sociology, University of

Pennsylvania 98

Cultures of Conscious Inquiry: An

Organizational Approach to Error Management-

Ronald Westrum, Ph.D., Professor of Sociology,

Eastern Michigan University 136

Mandatory and Voluntary Reporting of Transfusion Error

Human Error and Transfusion Medicine -

Jeanne Linden, M.D., M.P.H., New York State

Department of Health 163

Mandatory Reporting to Regulatory Agencies-

Sharon O'Callaghan, MT(ASCP), Consumer Safety

Officer, Food and Drug Administration 231

CONTENTS

 

AGENDA ITEM (Continued) PAGE

Construction of a Medical Event Reporting

System for Transfusion Medicine - James

Battles, Ph.D., Professor of Medical

Education, University of Texas Southwestern

Medical Center 257

Implementation of a Medical Event Reporting

System for Transfusion Medicine - Harold

Kaplan, M.D., Professor of Medicine, Columbia

University 271

Adjournment 302

PROCEEDINGS

DR. NIGHTINGALE: Good afternoon. It's 1:03 p.m. My name is Dr. Stephen Nightingale, and I am the Executive Secretary of the Advisory Committee on Blood Safety and Availability, and I would like to welcome you all to the tenth meeting of the Advisory Committee.

I'd like to begin by calling the roll. I know that Dr. Caplan is on the train coming down. Dr. AuBuchon has agreed to chair the meeting this afternoon because of Dr. Caplan's unavoidable delay.

Mr. Allen?

MR. ALLEN: Here.

DR. NIGHTINGALE: Dr. AuBuchon?

DR. AuBUCHON: Here.

DR. NIGHTINGALE: Dr. Busch? Dr. Michael Busch is in transit at this time.

Dr. Richard Davey?

DR. DAVEY: Here.

DR. NIGHTINGALE: Dr. Gilcher?

DR. GILCHER: Here.

DR. NIGHTINGALE: Dr. Gomperts?

DR. GOMPERTS: Here.

DR. NIGHTINGALE: Dr. Guerra?

DR. GUERRA: Here.

DR. NIGHTINGALE: Dr. Haas is unable to be here today because of transportation difficulties.

Dr. Hoots?

DR. HOOTS: Here.

DR. NIGHTINGALE: Dr. Kuhn?

DR. KUHN: Here.

DR. NIGHTINGALE: Ms. Lipton?

MS. LIPTON: Here.

DR. NIGHTINGALE: Ms. Pahuja?

MS. PAHUJA: Here.

DR. NIGHTINGALE: Dr. Penner I believe is still in transit and is hoping to be here. Dr. Piliavin is in transit, also.

Dr. Secundy?

DR. SECUNDY: Here.

DR. NIGHTINGALE: Mr. Walsh?

MR. WALSH: Here.

DR. NIGHTINGALE: And Dr. Winkelstein?

DR. WINKELSTEIN: Here.

DR. NIGHTINGALE: We do have a quorum of the voting members of the committee. The ex officio members of the committee: Dr. Chamberland?

DR. CHAMBERLAND: Here.

DR. NIGHTINGALE: Dr. Epstein? Is Dr. Epstein--he is on his way, I understand.

Major Saraceni is representing Colonel Fitzpatrick. Major Saraceni, you are present?

LT COL SARACENI: Present.

DR. NIGHTINGALE: Dr. McCurdy?

DR. McCURDY: Here.

DR. NIGHTINGALE: Dr. Goosby is unable to be here today. Is Dr. Snyder here yet?

[No response.]

DR. NIGHTINGALE: Again, I believe we have a quorum. Very briefly, I must read the conflict of interest statement.

The following announcement is made as part of the public record to preclude even the appearance of a conflict of interest at this meeting. General applicability has been approved for all committee members. This means that unless a particular matter is brought before this committee that deals with a specific product or firm, it has been determined that all interests reported by the committee members present no potential conflict of interest when evaluated against the agenda.

In particular, specified in Title 18 U.S. Code 208(b)(2), a special government employee, which all committee members are, may participate in a matter of general applicability--for example, advising the government about its policies related to the hepatitis C epidemic--even if they are presently employed or have the prospect of being employed by an entity, including themselves is they are self-employed, that might be affected by a decision of this committee provided that the matter will not have a special or distinct effect on the employee or the employer other than as part of the class.

In the event that discussions involve a specific product or a specific firm for which a member has a financial interest, that member should exclude him- or herself from the discussion, and that exclusion will be noted in the public record.

With regard to the other meeting participants, we ask in the interest of fairness that they disclose any current or previous financial arrangements with any specific product or specific firm on which they plan to comment.

And, with that, it is my privilege to turn the meeting over to Dr. AuBuchon, who will introduce Dr. Satcher.

DR. AuBUCHON: Thank you very much. I appreciate the opportunity to chair the meeting briefly this afternoon in the Chairman's absence, and I appreciate everyone's efforts to attend this meeting despite Mother Nature's best efforts to keep us from having it. It was an interesting effort on everyone's part to get here, and perhaps at the meeting's conclusion we can all retire to the lounge and discuss the stories from yesterday or this morning.

I particularly appreciate Dr. Satcher for rearranging his schedule, a very busy schedule, to speak with us this afternoon. We appreciate the amount of interest and attention that this committee gets at the highest level in the Federal Government, and we particularly appreciate his time and effort in listening to us. And so I appreciate his coming to share his comments today.

Dr. Satcher?

DR. SATCHER: Thank you very much, Dr. AuBuchon, and members of the committee. I also want to thank you for being here. It's amazing how much is getting done while the government is officially closed. The government was closed yesterday, and we launched Healthier People 2010, which is the health plan for the nation for the next ten years. And we have that conference going on over at the Omni Shoreham. It will go on through Friday. A lot of excitement.

This morning the first recommendations from the Community Prevention Task Force, which have been published in the Journal of Preventive Medicine, were released, and there's quite a group of people there despite the weather. So, again, I'm grateful to all of you who have weathered the storm to be here and to continue to contribute to our efforts.

Over the next two days, you will consider ways to prevent errors and accidents in blood administration. This is an extremely important topic in transfusion medicine, an extremely important topic to our Department of Health and Human Services. Errors and accidents that cause serious transfusion reactions are rare, but they are not as rare as we would like them to be, I can assure you.

Over the last decade, we have expended considerable effort to reduce morbidity and mortality caused by transfusion-transmitted diseases in blood and blood-product recipients. And the results of those efforts, as you know, have been quite gratifying.

At this meeting, we need to review the status of our past and present efforts to reduce the morbidity and mortality caused by human error, and we need to determine what additional measures we need to take.

I want to assure you that your recommendations will receive our prompt and careful considerations, and most of you know by now that even when we're not in the room here, we're still listening to you, because we have some of our best people from our agencies who participate in the entire meeting.

I appreciate the expertise that the members of this committee bring to this important task, and I'd like to just take a minute and welcome the new members of the Advisory Committee: Dr. Richard Davey, the chief medical officer of the American Red Cross, welcome; Ms. Karen Lipton, the president and CEO of the American Association of Blood Banks; Ms. Gargi Pahuja, a patient service representative of the Cooley's Anemia Foundation; and Dr. Jerry Winkelstein, the medical director of the Immune Deficiency Foundation and professor of pediatrics at Johns Hopkins. Welcome to the committee.

I would also like to thank Dr. Gilcher, Dr. Hoots, and Mr. Walsh for agreeing to serve another term.

We also appreciate the expertise of those who have come to testify before this committee. Dr. Kenneth Shine, the Institute of Medicine, has done a major public service by raising awareness of the issue of human error in the practice of medicine. We certainly share your concerns, and this meeting reflects our commitment to begin to try to translate some of those shared concerned into effective action. So we look forward to your presentation.

Mr. Francis of the National Transportation Safety Board has helped the aviation industry to make great improvements in safety for its passengers. And we look forward to hearing your review of how this has been achieved and certainly learning how your experience can guard our efforts.

Dr. Bosk, of course, in his book "Forgive and Remember," reminds us that the medical profession has a long and noble history of attempts to reduce error by practitioners, beginning, I guess, with Hippocrates. We need to recognize these accomplishments even as we work together to improve upon them.

I'm sorry that Dr. Helmreich, who has been a leader in the implementation of safety measures in the aviation industry and a leader in the application of this process through the practice of medicine, will not be here today. But I am glad that Mr. Francis will be able to discuss some of that work.

Dr. Westrum, whom I understand is on his way but not, I guess, yet in the building, has written about organizational cultures of conscious inquiry that seem less prone to errors and accidents.

The Institute of Medicine, again, has challenged us to identify characteristics and factors that enable or encourage providers and health care organizations, health plans, communities to continuously improve the quality of care. So promotion of a culture of conscious inquiry such as described by Dr. Westrum would appear to be an important part of our response to this challenge. So we look forward to hearing how together we might accomplish this.

This afternoon the committee will consider the merits of both mandatory and voluntary reporting systems in the field of transfusion medicine. Dr. Linden will describe and update the New York State experience with mandatory reporting of transfusion errors and accidents. Ms. Greene was held up in the Midwest, I understand, but Ms. Reardon and Ms. O'Callaghan will discuss how the FDA administers its statutory reporting mandates and also how industry responds to them.

Dr. Battles and Dr. Kaplan will then discuss voluntary as well as mandatory reporting and the system which they have developed, with NIH support, to facilitate the process.

On the subject of reporting and analysis of error, we encourage the committee to consider several things, but I will mention three questions. Obviously, should there be voluntary and mandatory systems for reporting errors and accidents in transfusion medicine? If a voluntary and mandatory reporting system is to exist, where should the line be drawn between when or what reporting is voluntary and what is mandatory? And to what extent is it possible or advisable to incorporate newly recommended reporting efforts into our ongoing efforts?

Tomorrow the committee will hear presentations of state-of-the-art systems for positive patient identification. Because of patient misidentification, and because it's been found to be the single most common factor in transfusion error and accidents, and because patient misidentification appears to be equally important as a cause of error and accidents in many other areas of medicine, we would be very interested in your evaluation of this area.

Finally, of course, in the public comment and committee discussion period tomorrow, we encourage all participants and guests to consider other initiatives that might reduce errors and accidents in transfusion medicine.

Once again, I look forward to receiving your recommendations on this very important topic. I thank you again for your efforts to assure both the safety and availability of the blood supply, and I assure that the Secretary and I will give very serious considerations, as you know, to your recommendations.

DR. AuBUCHON: Thank you very much, Dr. Satcher.

Do any of the committee members have questions for the Surgeon General?

[No response.]

DR. AuBUCHON: Okay. Well, thank you very much, Dr. Satcher, for taking the time to outline our task for the next several days, an important task that it is, and we appreciate that you will be listening to what we have to recommend. Thank you.

DR. SATCHER: I'm going to be here for the first few.

DR. AuBUCHON: Oh, excellent.

We will now turn our attention to some background information to help set the stage for tomorrow's discussion about error in health care. I'm reminded that there were two faculty in my medical school that were appropriately named for this discussion today. One of the faculty was Dr. Perfect, and that is obviously a goal to which we all ascribe, but we recognize that that isn't really possible. There was another physician on the faculty whose name was Dr. Goodenow, and his name was frequently mispronounced and describes the goal to which we may reasonably try to achieve, and he was Dr. Good-enough.

But the Institute of Medicine has done the country and health care in particular an excellent service by compiling much information in a very useful format for us to realize just how far we are from our goal of health care without human error, and I appreciate Dr. Kenneth Shine coming this afternoon to explain to us the basics of the report and what we can take away from it, particularly as it relates to transfusion medicine.

Dr. Shine is the president of the Institute of Medicine of the National Academy of Sciences. Dr. Shine?

DR. SHINE: Thank you, Dr. Satcher, Mr. Chairman, members of the committee. It's a pleasure to be with you, and I will try in a brief few minutes to give you an overview of our report.

I just want to remind you that the Institute of Medicine is not part of the government. We are an independent 501(c)(3) organization that does our work largely through volunteers, volunteer committees, as in this case, and we pride ourselves on our efforts to maintain an approach which is a deliberative, evidence-based, nonpartisan approach to issues related to health.

I think you may or may not be able to see these slides well, but I hope all of you have copies of the slides in front of you, and you can see what they say.

The title of our report is not fortuitous. The concept of "To Err Is Human: Building a Safer Health System" was based to immediately convey the notion that although negligence in health care, poor training, failure of individual performance, should always be sought out and that real malpractice deserves to be identified as that, no system involving humans is free of error, and that if we are going to respond to the problems of error, we need to build a system or systems of care.

It is critically important that this is not an exercise in fingerpointing. This is not the mistake of the doctor or the pharmacist or the nurse or any single individual but, rather, a failure in many of our institutions and in the practices--and although I'm going to restrict myself primarily to hospital practice, we believe these same issues attain to ambulatory practice--that all these areas require serious attention to systems of care.

I should point out that this activity is part of an overarching activity that began several years ago in which the institute chose to focus on quality of health care in America, and this is actually the third report in this area.

In the first of these reports published in JAMA in the fall of 1997, there were two points that I think are particularly relevant for this group:

First, that there is a substantial gap in the United States between average care and best care, and that that gap is not a problem simply of rich and poor, it is a problem of care in all of our institutions, and some of our wealthiest citizens in the wealthiest institutions do not get the best care. Closing that gap, for which we have major evidence, is critical.

And, secondly, this is not about managed care. Our studies have shown that in some areas quality of care is better in managed care; in other areas, it's worse. But to try to blame the cost containment effort as the cause of errors, for example, is not going to be terribly productive. It may have problems for other reasons, but this is a problem of our entire health care system.

Earlier in 1999, we brought out a report on quality of cancer care, which confirmed for cancer many of the problems we found in other areas, and also emphasized that there is a real relationship between outcomes and volume of services. And this is, again, a major issue for quality in this country. Doing something occasionally doesn't produce the same quality of care as doing something regularly and with a good system of care.

The Quality of Health Care in America Committee is chaired by Dr. Bill Richardson, and I don't expect you to read all the names on this list, but the notion that I want to convey is: A, it's a group of volunteers; and, B, it represents all segments of the health care system from a whole variety of perspectives. It also includes patients, it includes individuals who are purchasers of care from industry.

This committee looked at this issue from a very wide set of perspectives, and the fact that they could come up with a unanimous report and that their conclusions and recommendations were agreed to by all of the players is extremely important.

Errors was a particularly important area to start. The burden of injury is high. It's understandable to providers and consumers. Most errors are avoidable. Those of you who are familiar with what's going on in industry know that Motorola set a goal of six sigmas, if you will, what the standard of quality for each of its products would be so there would only be a defect in one in a million or less. And we don't see any reason why we should not be striving for that kind of excellence in health care. And the experience in a variety of industries--and you're going to hear about the airline industry--suggests that if you get to work on it, you can, in fact, dramatically reduce errors.

Now, one can argue about how to predict the number of errors, but the fact is that from a whole variety of studies, we believe that errors are serious contributors to a substantial number of deaths. And we've estimated that to be somewhere between 44,000 and 98,000 deaths, which is more than from breast cancer, AIDS, or motor vehicle accidents. So we believe this is a major issue.

And I would emphasize to you, a number of people have suggested, well, these people were going to, quote, die anyway. They were very sick and, therefore, an error may have helped things along, but they might otherwise have died. We know from intervention studies, particularly in the Inter-Mountain health system, that when you put in systems which reduce the number of errors, one can produce measurable decreases in injuries, in deaths, and, of course, in costs because injuries cost more money. We believe that many people are harmed and that adverse effects occur among 3 to 4 percent of hospitalized patients and that about 1 in 10 of those adverse effects directly contributes to death and that at least half are preventable.

These are costly errors. We estimate the cost between $17 and $29 billion and that the direct health care costs are substantial, but there are a lot of indirect costs.

I'll just remind you that when you prescribe the wrong antibiotic to an acutely ill patient, the consequences of that are extraordinarily widespread in terms of the implications that that individual is going to be in the hospital longer, not only has a risk of death but has a risk of extended morbidity, the length of time, as the Inter-Mountain people showed, in the intensive care unit is substantially increased, and that preventing that decreases--in the case of one of the Inter-Mountain studies, the kind of computer system that was put into place on antibiotics reduced injuries by 59 percent and reduced costs by almost 30 percent.

Medication errors are among the most important. Two out of 100 hospital admissions experience a preventable adverse drug event and increase the average cost per admission substantially.

I think medications have been particularly singled out, but I just want to emphasize--and I think you all are aware of this--there are many, many other kinds of errors, and one doesn't have to get carried away with examples of operating on the wrong extremity or doing the wrong procedure on the wrong patient to recognize that there's a whole spectrum of errors that run from medications, which obviously include issues related to transfusions and other kinds of activities. But medications is one of the areas that's clearly one that we ought to be able to directly affect.

Among the key findings were that errors occur because of system failures and that preventing errors means designing safer systems of care.

Just for purposes of definition, we believe an error should be defined as a failure of a planned action to be completed as intended, that is, either an error of execution, or the use of a wrong plan to achieve an aim, an error of planning. It's worthwhile defining one's terms in this regard because, again, this says nothing about the competence or the credibility of the individuals involved in it. It's that the plan failed. Patient safety implies that one would be free from such accidental injury.

And, secondly, by adverse events, we mean an injury caused by medical management rather than by the underlying disease or condition of the patient, and we believe that preventable adverse events are those that are attributable to error.

Our proposal is a four-part proposal. It includes first the creation of a national center to provide leadership and research. The second has to do with reporting systems to identify and learn from errors, I know an issue of great interest to this committee. Third, setting standards for safety through regulatory and market forces; and, fourth, to implement safe strategies within health care organizations.

As far as the Patient Safety Center is concerned, we think such a center can help to set national goals and track progress against such goals; that the center ought to issue an annual report to Congress and the President, develop and fund a research agenda, and disseminate information to the medical industry and to the public about what's happening in safety.

One of the things I'm often asked is: Aren't you scaring people away from health care if you tell them about all these errors? Well, one of our goals is to empower patients so that they know that if the pharmacist hands them a prescription where the pill is a different color than the one they were taking, they're empowered to say, well, you know, this could be a mistake, I got to find out about this, as opposed to simply accepting what happens in the health care system.

There are lots of other examples of individuals who are hospitalized where the failure to question the person giving the care prevented the correction of an avoidable mistake. Patients need to understand it is not a perfect system, and providers need to understand that they have to be able to explain to patients that, in fact, they know that they're doing--carrying out activities that are safe.

As far as reporting systems are concerned, if you don't know about it, you can't fix it. And we believe that we have to create an environment in which clinical providers can talk about errors, and that patients believe that the system will be responding to safety problems. In our discussions with patients in hearings that we've had and some of the recent writings by people in this area, one of the most frustrating experiences is for a patient to identify a potential problem and then have the sense that everybody is stonewalling, that everybody is failing to pay attention to it.

You probably are aware that since we reported our--we put out our report, the VA has reported on a program at one of its hospitals in which they made the decision that they, the institution, would notify patients and their families promptly whenever an error was discovered. And according to this particular report--and, granted, this is a particular population, but according to this report, although they had to make financial settlements with a number of patients and families, the aggregate cost to the institution by being honest and forthcoming was substantially less than that which they'd experienced before in a situation in which people stonewall, an adversarial relationship develops, and huge awards take place after lengthy court systems.

We believe there needs to be a two-pronged approach to reporting, that serious injuries or death ought to be mandatorily reported, and we believe that the experience in 21 states that have such systems needs to be reviewed in some detail, and that mandatory reporting take place at the state level. On the other hand, we also believe that an attempt should be made to rationalize the nature of the reporting so that data can be compared from one state to another and so that one can understand the variations in reporting.

Obviously, we're not naive about the problems associated with liability and the malpractice lawyers who are constantly looking for litigation. The committee's analysis shows that in those states with mandatory reporting of serious injury or death that there was not a substantial increase in the amount of malpractice litigation. Obviously, what happens is if there's a serious injury or death, it's not unlikely that someone is going to find out about it outside of the reporting system and that, in fact, a suit may take place. But it remains to be seen what the overall impact of such mandatory reporting would be. We understand it's controversial, but we believe that that is a sensible way to deal with this important problem.

On the other hand, a voluntary system in which near misses or minor injury or just errors that occur without producing problems, such as the near miss, ought to be in place, that it ought to be in place in every institution, and most important, that it ought to be protected by the confidentiality of peer review.

For those of you who have been in the health care system for some time, you'll remember when morbidity and mortality conferences were held regularly by all kinds of services, surgical and medical, and errors were freely discussed. In the malpractice era, particularly in California, those activities eroded substantially because, frankly, there were people in the institution who were prepared to provide information to the lawyers about potential suits. And as a consequence, people became very anxious about that.

We have to restore the concept, which has been lost over the last 25 to 30 years, that finding out about errors is, in fact, going to produce a suit and, therefore, let's not look at it, let's not find out about it.

Incidentally, the same thing may be true of the dramatic decline in autopsies which have taken place in this country.

The mandatory reporting system, as I mentioned, we think should be state-based, it should be standardized, but not centralized. It should focus on the most serious errors, and it should provide organization-level reporting with public access to some information.

The reporting system should encourage actions of purchasers, accreditors, professional groups, with greater participation from existing programs, and we hope that the new center would fund pilot projects, and I've already mentioned the importance of peer review protections.

We believe that there needs to be the setting of expectations and standards with regard to the level of dealing with errors, that health care organizations through licensure and accreditation and purchasers through their demands should insist that there be, in fact, those systems in place to deal with errors--and I'll come back to that in a moment--that health professional licensing and professional societies should take this on as a serious issue, that the FDA through its pre- and post-marketing processes should be involved in looking at these kinds of systems.

The most important word in this title is "culture," and that is, we need to create a culture of safety in health care organizations. That means providing leadership, designing jobs that avoid reliance on memory, simplification, and standardization. I'm an active clinician. I attend at Georgetown. I take care of general medicine patients and cardiology patients. And I can tell you that, as experienced as I am, I am primarily influenced by the last four or five cases of a problem that I have seen. I can tell you that I don't know, in fact, how Dr. Jones is doing with the outcomes of a particular treatment that he's given because I may refer him two patients a year.

You can't rely on memory, and the information revolution has, in fact, made it possible for us to move to simplification and standardization. We've got to promote effective team functioning, anticipate the unexpected, and design for recovery. We have to create a learning environment where everyone understands--and I think, again, this was one of the great strengths of the Inter-Mountain program, where everybody learned just how frequent these problems were, and then they learned how to fix them.

This is an issue of great interest to the American people. Fifty-one percent of the American public, according to the Kaiser Family Foundation--and this is their language--have been closely following the media coverage of our errors report. The President has issued an Executive Order to government agencies with regard to response to this, and as you know, there have been congressional hearings. And the Business Roundtable has undertaken an initiative.

One of the most interesting activities is the so-called leap-frog activity, which is a group of eight corporations, which includes General Motors. And what they have done is said to their managed care organizations and other groups that they purchase care from: We have three areas that we want to see plans developed in the next year implemented by your plan if you want to continue to take care of our employees. One of those is we want to see automated, auto entry of every order, including all prescriptions, in your institutions. You don't have to implement that in 12 months, but you have to show us an orderly plan in order to do it.

Incidentally, and this is not covered by the report, but there is no reason why physicians in their offices or even as they're walking around, given current technology, cannot be writing prescriptions using their computer or their Palm instrument as opposed to scrawling prescriptions. And the point is that we need to change the culture so that, in fact, those kinds of things take place.

May we have the lights, please?

This is a brief overview of the report. We anticipate that there will be multiple bits of legislation; we know of at least four bills that will be introduced into the Congress. We know that the administration is trying very hard to figure out exactly how it's going to position itself with regard to some of these issues. Our responsibility is to provide the data and to make some recommendations which we hope will produce some action.

I'll conclude by saying that it's a particular pleasure for me to talk to this committee. A number of years ago, some of you will recall a report we did on HIV and the blood supply in which report we identified a whole series of issues with regard to the oversight of the blood supply, characterized what we thought were the important responsibilities of this group, and also the importance of leadership in the Department of Health and Human Services. We were very concerned that in the HIV epidemic in the '80s that there was not clear, unitary responsibility with regard to issues of the blood supply, and I think it's with some gratification that we recognize that the Assistant Secretary has, in fact, that responsibility and that he has followed up by maintaining his interest in this activity. So from our point of view, I have a particular pleasure in talking to a group that in many ways is carrying out a series of functions that we thought four years ago were critically important to the country.

Thank you very much, and I'd be happy to answer questions.

DR. AuBUCHON: Thank you very much for your presentation.

If any of the committee members have questions, please make sure that you press the button on your microphone so that we can all hear your question.

I was struck, Dr. Shine, that you mentioned many issues, many aspects of the culture that really have to do with quality assurance systems, and I can't speak for medicine in general, but certainly in transfusion medicine, many of us have spent a lot of time in the last several years trying to develop quality assurance, quality improvement systems in order to create that kind of culture of which you're speaking that will look for errors and not shoot the messenger.

Do you have any comments about quality assurance systems in medicine in general and how they can be applied to help reduce error?

DR. SHINE: Your point is well taken. I tried to make the point that this error report is part of an overarching series of activities with regard to quality. We will be publishing three more reports in the year 2000, one of which will be what we call the CHASI report--it doesn't have a final report yet--which is, What is the infrastructure you need for a quality health care system in terms of both local and national application? And we're working very hard on that aspect of it.

Secondly, we'll be bringing out a report on information systems in support of a comprehensive quality program. We'll be bringing out a report on communications about quality, to patients, to purchasers, to providers, and others.

So there is no question that we see the necessity for an integrated system, and it just happened that this was one of the reports that particularly captured people's attention.

One of the biggest problems we have at the present time--and it's one that I'm personally very interested in--is that what's happened is that many physicians see the attention to quality as being primarily a response to attack from the outside. They see report cards and all of this other stuff that they've heard about as outsiders trying to tell them how to practice medicine or whatever.

One of the key issues is whether professional societies and organized medicine can move from that kind of notion, recognizing that those outside pressures are going to persist, and begin to talk within their professions about the notion that continuous quality improvement is, in fact, part of their responsibility in terms of the way the health care system works.

A few medical schools now have begun to put quality assurance into their curriculum--not many. It's beginning and we're encouraging that activity. And it's our intent to continue to carry on a conversation through--in fact, the meeting that I'm referring to is a meeting about a week ag in which we had representatives of major corporations who want to work with professional societies so that when they set quality standards in purchasing care, they're actually talking to providers about how to develop that in a way in which the providers feel they have a stake in improving the quality.

The reality, as you and I know--I made this comment I think to Mr. Francis before--health care in this country consumes one-seventh of the gross domestic product, but it's a cottage industry. Some of the cottages are huge. They have all kinds of heavy equipment in them, that we use science extraordinarily well in taking care of individual patients. But the health care system itself is not a well-organized system, does not have the kind of infrastructure for information, and certainly has not had continuous quality improvement as one of its fundamental goals. And I think that's part of the wake-up call that we're trying to send, both to the professions and to the purchasers and to the patients.

DR. AuBUCHON: Possibly in transfusion medicine we have a bit of an advantage in that the system is a little more closed, has fewer players, and there is already an established reporting system between blood supplier and blood transfuser and closer follow-up of the recipient than might happen in many other health care situations. It would seem that you have a daunting task in trying to set up a quality assurance system that might include, as in your example, an independent pharmacist who's totally disconnected with the health care system or the physician who wrote the medication that was either incorrectly prescribed or incorrectly read off.

DR. SHINE: Well, in reality, with modern information systems, there is absolutely no reason why the physician cannot be putting that order in in a form that gets to that pharmacist, or whatever pharmacist the patient wants it to go to, in a form that is, in fact, a legible, printed form.

You know, my great--I mean, one of the reasons I love to teach is because of dealing with young people. And what I'm just so excited about is we've got a whole generation of young people in medicine who actually love computers and love information systems and are not afraid to use those systems in order to take care of patients. And I think that as some of our older colleagues don't make the adjustment, I think younger people are going to help us do that. So we think there's a lot of opportunity.

It doesn't matter whether you're in the same location or it doesn't matter whether you're rural or urban. Information systems today allow you to do a lot of the things that you couldn't do 15 years ago or 20 years ago, and I think we have not taken full advantage of it.

As far as blood banking is concerned, clearly there are areas in which quality improvement has been a model, and I think the blood system is one of them. I think another is anesthesia. I think the anesthesiologists have probably done a better job in standardizing equipment and standardizing--and yet you still have problems. You still have errors.

Some of you may be aware of the famous case in Florida in which a young boy died as a consequence of a routine procedure in an operating room, which all depended on the way in which the medications were presented to the anesthesiologist. It took a substantial amount of investigation to find out what was wrong. But when it was found out, a simple change in the dilutions and in the way the medications were presented make that error unlikely to occur anywhere else. And what's most important about it, they published--they publicized the results so other anesthesiology units could prevent--would not make the same mistake.

Again, we have a system which is compartmentalized. Every hospital has its own three computer systems. When a problem is fixed in one place, nobody else knows about it because nobody wants to talk about it. And I think we've got to stop that. We've got to change it.

MS. LIPTON: Dr. Shine, one of the things I was focusing on in your key findings was you said preventing errors really requires designing safer systems, and from our experience, I think, in blood books and transfusion services, there's an equal--it's equally important to focus both on the effective implementation of those systems and maintenance. Because you can put a system in place and if you don't--you know, if you're not willing to commit time and resources to maintaining it, it really does have a way of just disintegrating.

Did your committee focus at all on that, or were you focused purely on the places where there was a lack of a system?

DR. SHINE: It's CQI. It's just not QI. We obviously believe in continuous quality improvement, and that means regularly looking at things--the pace of change in medicine, as you know, is extraordinary, and no single approach, no single system can stay static given the changes that take place.

What I think the committee was trying to say--and obviously the slides just pick up a few of the ideas. What the committee was trying to say is when you think about care, start by trying to design a system so that it's likely to produce the kind of quality that you want as opposed to our historical approach. And I'm as guilty as anybody. I mean, I've built coronary care units and run cardiology units and recruited surgeons and done all the things that cardiologists do. But when I was in Los Angeles, I was amazed in the early '70s that there was a very prominent surgeon in town who seemed to always get much better results than anybody else in town. He's a world-famous surgeon. And I didn't think he was technically that much better than some of the other people that I knew.

So I went to visit him, and it turns out he had an extraordinarily well-organized way of taking care of the patients through surgery. In fact, he had a different room for each day post-op so that patients actually graduated in terms of care. And occasionally a patient would get hung up in a particular room, but, by and large, they graduated. And it was extraordinary to see at the end of that time on the fifth or sixth day people walking around, whereas in other institutions people were just barely getting extubated. This is in the '70s.

So thinking about designing the system up front in that way is something that my colleagues and I often don't do. Having said that, I agree entirely with you. It's a continuous quality improvement process. And I think we will try to--one other element of it that I have to be clear on--and you're going to hear from Mr. Francis about the airlines industry. We have a great deal to learn from industry, from other industries in this regard. But I don't know what system of quality improvement works best in health. And one of the things we're interested in doing is setting up some studies to look at what kinds of quality improvement systems actually work the best. Because it is not necessarily the case that what Ford did for automobile quality or what the airline industry did for airlines will in any detail be applicable in health.

The principles may or may not be the same, and I think we can learn a lot from industry. But I also believe we need to study the quality improvement process and try to identify which of those are the most likely to work around the country.

DR. SATCHER: Ken, will you say just a word about the kind of response you observed from professional organizations? I know we're talking about interdisciplinary--

DR. SHINE: I'm encouraged very much by that. I think that--first of all, I was very pleased that the Association of American Hospital Plans in writing thanked us for the report, said they were going to try to support the recommendations of that report. The American Hospital Association has announced an initiative in this area in terms of its members.

The American Medical Association, as you know, has created a Patient Safety Foundation, and we've had conversations with them. They plan meeting perhaps in the spring in which they want to bring the various professional organizations and other interested parties together with regard to how to respond to the problem. And although we don't cosponsor activities with any of these players, we will obviously have members of the committee there to explain what the committee was trying to say.

So my sense is the flip side of it is there is obviously an undercurrent of great anxiety about the litigation, about the liability issue, and we're going to have to face that. The liability issue, on the one hand, it is clear that people who are harmed as a consequence of negligence need to have some mechanism to get compensation. Mr. Francis will talk more about his notions about how that would work in terms of systems or institutions of care.

But we have to get--if we continue this notion that every error is a criminal act and that somebody needs to either pay for it or go to jail, then we create a culture in which you're not going to improve things. And the question is: How do you change that culture?

I believe the American people are sophisticated enough to know that you cannot have perfection, and yet the legal system in many cases demands perfection. And somehow we're going to have to cope with that problem.

DR. GUERRA: Dr. Shine, thank you very much for such an informative presentation. I'm Fernando Guerra, and I direct the Public Health Department in Texas.

DR. SHINE: In San Antonio.

DR. GUERRA: Yes, sir. A few questions.

One, did your committee deal with the other side of that, the errors of omission, where a lot of time, you know, it essentially is a judgment call? On the front lines of clinical medicine, when you're dealing with a critical set of circumstances, sometimes decisions are made or not made that ultimately affect the outcome. That's one question.

The other that I think is very relevant to certainly the work that goes on in public health, because that is sort of the place where we receive reports, whether it's communicable diseases or some critical alert that potentially puts people or communities at risk. I don't know that the capacity currently exists for trying to establish a valid reporting system for the kind of incidence that you have made reference to, is one concern. And I think that especially when you look at, I guess, a system that, you know, is still not so well established in terms of the information technology and the exchange of information just because of lack of resources, that even though I think that there is a major effort, which I'm sure Dr. Satcher would be willing to comment about, to try to better establish that through the Health Alert Network System, the national--to better be prepared for instances of bioterrorism or whatever might be a potentially serious threat. But it is not yet in place to try to do this to see if there are some clusters, to see if, in fact, there are some particular hospitals or physicians or clinics that perhaps have higher incidence rates of some of these adverse events.

DR. SHINE: Two very good questions. I think in the case of the first, yes, the committee does make some reference to differentiating errors, problems of either omission or errors of judgment, which are in a different category in terms of the kinds of errors that we're talking about.

At the same time, we anticipate in the course of the--in the JAMA article in 1997, and in some of our subsequent reports, we'll be talking about overuse, underuse, and some of the issues related to how you identify when underuse is real or not. So I think your point is well taken. It's not an easy subject, and I think it will require a substantial amount of thought.

With regard to the capacity, yes, there may be variability, and there are a variety of strategies that one might undertake. I frankly have difficulty in seeing that the great State of Texas would not be in a position to find some way to deal with a reporting system, at least at the state level as opposed to at a municipal level.

I would have to remind you that--just for your own information, I chair the New York State Cardiac Advisory Committee, which is--New York State has mandatory reporting of the outcome of cardiac surgery, of angioplasty, of pediatric cardiac surgery, and we're about to report valve surgery, and we're going to go on in the next couple of years to acute myocardial infarction.

That mandatory reporting has done exactly what you described, and, in fact, one of the things that's most impressive about that mandatory reporting is that institutions that no one would have believed had high levels of problems had problems; that the highest mortality in pediatric cardiac surgery in the State of New York--this is public knowledge--was a well-known teaching hospital, which stopped doing pediatric surgery in infants and young children.

Data makes a lot of difference, and without the data, you can't tell what quality is. But with all the problems with regard to resources, Fernando, I mean, I believe in Bobby Orr's famous comment--no, I guess it was Wayne Gretzky, who was asked about how he was so effective, and he said, "Because I always try to skate to where the puck will be."

What I'm trying to say is that, in terms of the systems we are talking about, we ought to be skating to where we want to be, and we ought to try to--there is going to be money for infrastructure of public health. It happens that some of it is going to come through bioterrorism concerns. The institute is going to be undertaking a study this fall on--actually, I hope this summer--on public health and its infrastructure, and we're hoping that some of the things that we'll address in that area will help you do what you need to do in addressing these issues.

But we've still got to go where the puck's going to be.

DR. AuBUCHON: Dr. Satcher, would you like to give us an update as to the administration's response to this report? I understand that there is an effort underway to gather some information and lay out a plan.

DR. SATCHER: I don't know [inaudible - off microphone]. That was intentional.

[Laughter.]

DR. AuBUCHON: That was a judicious "No comment."

DR. SATCHER: No. I said no, I'm not--I don't think it would be good for me to try today to give you an update on where we are because we are discussing it among ourselves here.

DR. DAVEY: I think as we proceed through these two days that it might be just worth keeping in mind that when we look at the blood transfusion system in the country, there are really two arms: there's the blood center and there's the hospital. And I think, Jim, as you pointed out, especially in the blood center, we've been very attentive in the last few years to quality, CGMP, quality assurance, with, I think, some fair success.

But now I think the pressure has moved a bit more to the hospital. One of my colleagues at a major hospital pointed out to me that the quality of the fluid in the blood bank now is pretty good, but the price has gone up to some extent, and he's been pressured to reduce staff. And he's seeing more errors now in the transfusion side in the hospital.

So I think it will be incumbent on us to see if we can bridge that gap between the blood center, where I think we're in pretty good shape, and the hospital, where perhaps now the focus needs to be on transfusion.

DR. SHINE: One of the reasons that we're involving purchasers, particularly corporate purchasers, in these discussions is they have to understand that there's something called value. It's not only how much you spend, but it's also what's the quality of the product you get. And we're looking that when these kinds of issues come up where you have to do cost cutting that we begin to develop some evidence as to what that does to quality. Because I think most purchasers don't want their employees--and I mention purchasers, corporate purchasers, because they do represent the 800-pound gorilla in this area. I think most of them want the right things for their employees in spite of all of the bad press they get at times, and that if, in fact, one can make a connection between cost and quality, they're much more willing to back off. But you've got to have the data, and our problem is we don't have the data. We don't present the arguments and we can't show that if you do this, this is what's going to happen, or that if you do this, this did happen. And we've got to get to that point.

DR. HOOTS: I think it's very impressive what's been accomplished today, but in reading the chapter particularly on the Center for Patient Safety, it wasn't completely clear to me whether the big aim was more of a data clearinghouse with an advisory group to process the information or to go perhaps farther and more globally in terms of analogous to what NTSB and FAA do for aviation in terms of both regulation and investigation.

I didn't sense that that was part of it, but that left me then wondering. Is the overall recommendation to proceed along avenues of medical care in parallel and then expect the convergences to occur as they need to when they cross function? For instance, here we're talking about blood bank and blood safety, and in your case, you'd be talking about, say, as a treating physician, cardiologic or cardiovascular safety, and they would go along in parallel, and at some point--the data would, of course, arrive at a central place, but where they needed to have interaction would be decided by the parallel system.

So I'm not clear which one was the overall recommendation of the IOM.

DR. SHINE: I think that--and I want to be careful here because the nuances here, I would much prefer to have someone from the committee telling you what they were thinking. My understanding was that while they thought that it would be very useful for a national center to be able to pick up information at the state level, they did not want to see a national mandatory reporting system, for a variety of reasons. They did not think that was going to be the most constructive way to go.

The question of the convergence of the data, I can't answer that. I don't really--I'm not sure that I can reflect their thinking with regard to that particular issue. It's a very important issue because one of the things we've learned is that if you look at clinical care in hospitals, a hospital can be spectacular in taking care of breast cancer and abysmal in taking care of hip fractures. There is no--you cannot look at a hospital and by evaluating one or two or three of its services tell you anything about the quality of the services in other areas. And for that reason, you have to be able to get the data to come together, but I can't answer what the role of the center would be from that point of view.

Incidentally, there has been some discussion, I understand, about making it a Center for Quality and Errors, as opposed to a separate Center for Errors. I don't think the committee would object to that at all. I think what they're saying is there need to be new resources. You can't do it with the same resources. And, B, it needs to be identified as something that--and I think this goes back to some of the earlier questions, that on the one hand, you want the error issue to be part of the quality initiative; on the other hand, one of the ways you justify the quality initiative is by emphasizing errors. People respond--we've had workshops with the press over why does the press not care about quality. They care about cost. They care about access. They care about referrals. But they've written very, very little about the quality of health care. And people, when you talk about quality, their eyes glaze over: What does that mean? Errors they understand.

So the idea of connecting these two is not at all a bad idea, but I can't answer your specific question in that regard.

DR. GILCHER: Ron Gilcher, the director of the Oklahoma Blood Institute. You addressed an issue that I think is very critical, and that is the medical-legal issue. And I think there's a part of this--and I wrote some notes down, and I'd like you to comment on this. And I said undesired outcomes or unexpected outcomes are not necessarily an error or an adverse event. However, there is clearly an attempt on the part of the legal arena to interpret an undesired or unexpected outcome to make it an error or an adverse event. And, clearly, that has affected us in the transfusion medicine sector.

For example, before testing for a certain virus was put in place, specifically hepatitis C I'll use as an example, and then to be sued subsequently because a patient developed hepatitis C even before testing was in place, and there's an attempt to interpret that from the legal arena as an error or adverse event, as opposed to an undesired outcome.

DR. SHINE: I agree with you. I think probably in addition to your own experience, I would cite that you can now go to major portions of rural California, Colorado, and a number of other states where you can't find a local physician who will deliver a baby because a damaged baby is considered to be the cause of a suit, regardless of the reasons why that outcome was bad.

This is a major public policy issue. There is a substantial amount of interest in the Congress about doing something about it. What's going to happen, I can't tell you.

I think one of the issues that we're trying to emphasize in terms of the errors report is the notion that you have to make these kinds of distinctions, and if you don't--I've said this to a group of lawyers--the price that our society will pay for the litigation is a lot more people are going to die, because to the extent that the litigation prevents us from improving quality and taking care of errors, the errors are going to continue again and again and people are going to die. And the question is: Is that a defensible position for the country to take?

But, look, you're talking about a lot of money, you're talking about a lot of politics, and I can't tell you how that's going to come out.

DR. GOMPERTS: Dr. Shine, I'm Ed Gomperts. Thank you for being with us this afternoon. The leadership that the Institute of Medicine is playing in focusing on this issue is obviously of great importance to health care today.

My question to you is: Where is data currently--how is it being generated today? Where is it coming from? Where are the gaps? How did the Institute of Medicine gather the information on which to make these recommendations?

DR. SHINE: A serious problem. Most of the available data is based on individual studies conducted in particular institutions, either directly because one was trying to identify errors or because one was putting in a before-and-after system to minimize errors. So it is not in general the result of comprehensive reporting. That's one of the reasons why make reporting such a major issue. It's a result of a series of investigations carried out by a variety of investigators in a variety of settings.

What's important about it, again, as I tried to stress in my comments, is that it didn't matter what the setting was. The same kinds of problems were seen in a study in major Boston teaching hospitals as were seen in non-academic health centers. So we have some reason to believe that those studies are generalizable, but without reporting systems they're still samples and they're not--it can't be assumed.

Mr. Chairman, I'm afraid that I've got to be--I turn into a pumpkin in about two minutes.

DR. AuBUCHON: Well, we greatly appreciate your time this afternoon to share this important report with us, and we congratulate the IOM on focusing our attention on it. Thank you very much for coming.

DR. SHINE: Well, thank you, and good luck with your deliberations.

DR. AuBUCHON: Thank you.

Many of us boarded an airplane to come to this meeting, or tried to board an airplane to come to this meeting, and I think for those of us who travel even frequently by air, boarding the plane always brings at least a moment's thought as to whether or not the plane is going to reach its destination safely. The aviation system in this country has an excellent track record and one that appears to continually be improving. But that does not come by chance, obviously. It comes after a lot of effort.

So I'm very happy that our next speaker, Robert Francis, who is the immediate past chairman of the National Transportation Safety Board, is here to talk about aviation safety and human error to possibly show us one way that the issue of error can be addressed successfully.

MR. FRANCIS: Thanks very much for having me here. I feel a little bit--maybe not like a pumpkin, but there is a certain element that says, What's it like to be asked to pinch-hit for Ted Williams? As everyone has mentioned, Bob Helmreich couldn't be here with us today, and he really along with Ron and some others have been the major, major players in what's happened in aviation. So I make no pretense at all to be able to give Bob Helmreich's presentation. He sent me all the slides. I've been temporarily in an office about two blocks from here. Yesterday afternoon there was a fire in that building, so I'm here with about three of Bob's slides to help me out. So I will be, as I often am, winging it a little bit. But I think that hopefully there will be some value.

I was coming back on a plane on Monday and reading the New York Times, and I read the article that I'm sure most of you have seen. And I was struck, I guess, by the negative tone that I saw there. And this is not unrelated to what Dr. Klein has just been saying.

I think that you in the health care industry, as well as those of us that are in other safety areas, are going to have to get moving here. And I think everybody's going to have to acknowledge and accept that moving means you're not going to be doing business in the same way you've been doing business and with which you are fairly comfortable as human beings.

I'm saying this to you not as an outsider--well, I am saying it to you as an outsider. I will say to you, however, that I made almost the same remarks to a group of aviation folks a couple of years ago. I told them to stop whining and get moving. And I think that these arguments about litigation and all of the rest of it are very handy for those people who don't want to move. But I would also say to you--and I'm not a lawyer but I have a very smart one who worked for me. I would also say to you that as these kinds of systems are being discussed, you're going to reach a point--and this is a point that we're getting very close to in aviation, may already be there. There are a lot of smart lawyers out there, and if they understand the availability of these kinds of systems to deal with error management and you have not instituted something like this, you are going to have yourself a liability problem.

Let me talk a little bit about a couple of Bob's slides here. I spoke--this is not Bob; he was actually there--at a Royal Aeronautical Society Human Factors Conference a couple of years ago in London. And it was--we were talking about air. I don't think Ron was there, although he may have been. And somebody got up and said: Error is the downside of having a brain. And I thought, How wonderful. And there we have it. We all know that human beings make errors, and so the challenge becomes how do we manage these errors in whatever environment we're in to make the system that we're working with safer.

Bob says human error is ubiquitous and inevitable. Errors increase the probability of incidents and accidents. And error management strategies can reduce the severity of errors. And I think that applies wherever we are talking, and it's a very, very basic sort of thing about dealing with human beings and human activity and particularly in the safety realm.

NASA has done, as you probably know, a lot of research in aviation safety as well as in the space arena, and they find, and we find in NTSB, or NTSB finds--I used to when I was there--that 70 to 80 percent of incidents and accidents are attributable in one way or another to human error. And I want you to listen to this next, the sources of errors: leadership, communication, monitoring and challenging what others are doing, conflict resolution, and decisionmaking. Now, there ain't nothing unique about that to aviation, I would argue.

Bob says medicine is different but some things are common, and he says safety as superordinate goal. I'm not sure I know what that means, but I think it's important.

Cost cannot be ignored, and there was some reference to that earlier, and certainly, you know, in addition to talking about error management, we're also talking about risk management. And there is risk involved in everything we do, and there's obviously a necessity to decide where you're going to spend your resources.

I did a lot of work on TWA, as some of you may know, and the issue of the allocation of the safety dollar is enormously important. And if we're not careful, we can end up with misallocation of lots of dollars. So, again, something that has to be managed and has to be watched.

Threats from multiple sources, second-guessing after adverse events--boy, do we know about that in aviation.

Now, the second-guessing in aviation tends to be immediate. If it's EgyptAir, if it's TWA, if it's ValuJet, if it's American Airlines in Colombia, there are immediately 17 expert commentators on 17 24-hour news stations that tell us exactly what happened and why it happened. And the rush to judgment is a marvel to watch.

You have something of a luxury, I think, in that you're not faced with that. There is absolutely no question but that Dr. Shine and the academy's report is a wake-up call for you. And I think that--I have read it, and, you know, I'm impressed. But I think that some of the comments that one hears in reaction to that are unfortunately not particularly positive. And I don't mean to say that everything in there has to be adopted or incorporated. But there's lots of thoughtful stuff. And the public has now got this on its mind, and I don't think it's going to lose track of it. And if you folks in the field of medicine are not starting to move on it, I think it's going to be at your own peril and not just peril in terms of your liability.

You know, one of the things that--let me talk for a second about mandatory and voluntary reporting. I would--and, again, we have the same parallel in aviation that you have. There are requirements in aviation for mandatory reporting of defined incidents and accidents. This obviously allows us to deal with the after-the-event search, to make changes, to avoid having that incident or accident happen again. It's important. The NTSB is important and accident investigation is important. Is it the way we should ultimately be doing business? I would argue to you no. It's the least productive way to go up and be picking pieces of aircraft out of South Dakota or out of the Florida Everglades, is not a good way, for obvious reasons, to be doing business.

And so the reporting system in that mandatory sense of incidents and accidents is important. These are things that have happened and you have to deal with that. But I would argue to you that what we in the aviation safety business, transportation safety, and a lot of other safety, and you should be thinking about is systems that are going to allow you ultimately to preempt those incidents and accidents. That's what we want, is preemption, and that's what I think is the strongest argument for voluntary reporting systems.

Let me just finish with mandatory. I'm not at all sure that a mandatory reporting system exposes you to any more liability to the sharks than you have otherwise. If you have a serious incident, a serious injury, or a death, that is not going to be an invisible event. And I don't think that your liability is any higher by having a mandatory reporting system for that that allows you to look more quickly and more systematically at what's going on in your system than to have it without a mandatory system.

Let me talk about voluntary reporting, and I am an enormous proponent of voluntary reporting systems. it's possible to have voluntary, confidential reporting systems, and American Airlines has a wonderful one. You don't develop these things overnight. All the same kinds of arguments and--the arguments are the same in aviation or what we're hearing here or what we saw in the New York Times article on Monday.

American Airlines has spent ten years working on this, and they've worked out a system within their company, with the regulatory authority, with the FAA, with their employee groups, that encourages voluntary reporting of error in their systems, whether it's by pilots or now they're getting into mechanics. It is enormously valuable.

And let me just cite you, most of their reports that they get from the pilots include either altitude busts, going to the wrong altitude in the airplane, or heading, so they're air traffic control kinds of errors.

But when you sit in one of their meetings and you look at how the dynamic of this meeting, how constructive it is, the individual pilot is not discussed. His name is not known to everybody in this room. It's known to a very few people. But say you're talking about altitude busts, and you're getting a pattern of pilot self-reports that say, you know, I busted my altitude, you know, by 400 feet, or I was 300 feet below my assigned altitude, or whatever it is. And then you start talking to the pilot and saying it enables you to find out why that happened. So maybe over the course of six months American has a whole slew of these, and they're finding out from the pilots, gee, I missed the controller calling me on this one or I missed leveling off the airplane because a flight attendant came in the door at the time that I was engaged in going to that assigned altitude.

Well, it may very well be that if there's enough of that, what you want to do is, when there's an altitude change going on, you want to prohibit anybody coming into the cockpit of the airplane until they're established at the new altitude. And I think this is--you know, it's not dramatic. It's not something that's caused a lot of accidents or prevented a lot. But it may. It may one day. Can we document that? I don't know. There's just logic that says that you don't want airplanes flying around at the wrong altitudes.

So that kind of a system, is it easy to work out? Is it easy to get American Airlines to say, gee, this guy screwed up and I'm not going to rap him or fire him, as the case may be? Is it easy to get the FAA to accept that they are ceding some of their enforcement responsibility for what they correctly have perceived to be the greater good of being able to look at error that's taking place in the system?

So I would say to you enormously, enormously powerful tools, and I wouldn't see why--sure, it's not the same in medicine as it is in airplanes. You don't have American Airlines and you don't have an associated pilots union, et cetera, et cetera. There are differences. But I cannot believe that it cannot be done. I can't believe that the State of Texas couldn't engineer--I mean, you've got Bob Helmreich sitting there in Austin that knows as much about it as anybody in the world, and he's been doing a lot of work in medicine. These kinds of things can be done.

What's going to happen if you don't do it? There are four Congressmen who have got hearings and probably 50 that have got bills or have cosponsored bills. I'll tell you, in aviation at least, the worst form of regulation is regulation by statute. It ain't a good way of doing business, and I'll give you an example from aviation.

Collision avoidance systems in airplanes, a wonderful device. The FAA dragged its heels, dragged its heels. The industry opposed it. It was going to cost too much, blah, blah, blah. It hadn't been sophisticated enough. There wasn't enough known. The engineering was difficult, et cetera, et cetera.

Finally, there was a midair accident, and the Congress passed a law telling the FAA to mandate collision avoidance systems, which they did. And the problem with it was, whether as a result of good lobbying or as a result of inattention on the part of Hill staffers, or whatever it was, the statute was written in a way that it excluded all major cargo aircraft in this country. Now, that's 800 big airplanes. So we to this day have--not 800 because some of them have started to voluntarily comply. But there is not to this day the requirement that FedEx, UPS, Airborne, have collision avoidance systems in their airplanes.

So don't make them do it by statute. Let the Congress give you some guidelines, maybe some definitions of terms and stuff. But you're the ones that know how to make it work and how it should work.

You know, I think you have an advantage in that you start with a clean slate, in a sense. You got your wake-up call. But you've got the advantage that there's not going to be another TWA or EgyptAir three months or six months from now which is going to lead to another CNN, et cetera, et cetera, -led rush to judgment here. So take advantage of the opportunity that you've got and the relative luxury compared to some others in terms of being able to do something.

I think, you know, there are going to be different elements of the industry that are saying--are enthusiastic and less enthusiastic and opposed, and everybody's going to have to give up something. Again, this is the same thing I was saying two years out in San Diego to the aviation folks. When we decide to accept voluntary reporting systems and the confidentiality and a kind of no-fault--not immunity, by the way. The word "immunity" was being used in that New York Times article, and I don't know--and it was actually being used in a recent aviation thing that the White House was talking about, and I'm not sure those White House people totally understood.

Voluntary reporting systems do not give you immunity. There is some protection, but if you do something either intentionally, violate intentionally or you violate in a manner that shows that you are not certifiable as a pilot, that you shouldn't be certified as a pilot, that your credentials and your performance are such that you shouldn't be flying the airplane, then you're not immune. You are immune if it's an inadvertent act, et cetera, et cetera.

So the word "immunity" is one that you will never hear in American Airlines or you will never hear either the pilots unions--and they understand this. The pilots union, they're major supporters of these kinds of programs, so they're not a threat to employees if they're properly done.

So I guess I would conclude there and just say get started because, if you don't, somebody's going to get started for you, I think. Is it easy to do? No. Is it more complex or less complex than what we've been through in aviation? I don't know. Maybe more complex. Can it be done? It's got to be doable. It doesn't have to be done quickly, I don't think. I mean, it seems to me that Ed Salviday, who's the vice president of United Airlines and one of the leaders of safety in this country, talks about in these kinds of programs crawl-walk-run, and I think that applies. But I think you better get started crawling, and you'll get some good results by so doing.

DR. AuBUCHON: Thank you very much.

You exhorted the airline industry several years ago, and you are exhorting medicine at this point to stop whining and get moving.

It is apparent that the airline industry is moving. What got them off the dime? What got them to pay attention to this and institute the kinds of programs that you are mentioning as being effective, and what can we learn from that to possibly awaken those areas of medicine that are not moving at this point or not interested in moving?

MR. FRANCIS: I think it is an educated process, and I think obviously aviation, given the American example, had been thinking about this and people had been working on it for a long time, but really I think people have to understand the advantages that can come out of it, and I am talking about voluntary reporting systems. They have got to understand the advantages that can come out, and it has got to be explained to them that it is not at their peril that it is happening.

If the regulatory authority has to give up a little bit of its ability to enforce, that is a sacrifice that has got to be made. Everybody is going to have to give up a little something. It is not going to be perfect for anybody that is coming into this partnership or cooperative effort.

MS. LIPTON: Mr. Francis, I was wondering, we keep talking about mandatory as having some sort of punitive action attached to it that you would go after someone on the basis of what they filed, but couldn't you get to where you want to go even in terms of voluntary reporting faster if you made the requirement to report being mandatory, not your required participation?

The reason I am wondering if people considered that is because we keep arguing about resources. Just to give you an example, we as an association set standards, but we are voluntary. We require organizations that are members of ours to adopt standards that really do force a lot of what you are talking about.

We have had a number of hospitals drop out. They drop out because they say we cannot afford to do this, it is too complicated, and so our experience, if you will, in trying to push this as the voluntary right thing to do, it has not been entirely successful.

Now, maybe it is because we have not had enough media attention, but is there a disadvantage to making it a mandatory reporting requirement? Because then hospitals will have to allocate resources to the process of analyzing errors, reporting those things, and you don't get into an argument or a tug of war about where those dollars go in the hospital.

MR. FRANCIS: I think there is a cultural issue here, and you are talking about "an hospital," will they participate or won't they participate. I think there is going to be, as you say, some pressure on them to go ahead and participate.

I am a little hesitant to be supportive of mandatory. I think that maybe some day that will be the case, but I think what you are finding in the aviation industry, it is not mandatory that any airline have this system and that voluntarily they are going into it when they feel comfortable doing it.

Lots of time, it's an issue of the culture, and I think once you start talking about mandatory, what you risk doing is creating an adverse political reaction in the totality of the culture of the industry that may deprive you of the ability to be working with those folks who are interested in going forward with it.

MS. LIPTON: Could you build in some kind of financial incentive to participate?

MR. FRANCIS: You can build in whatever you are creative enough to get accepted. Absolutely. I don't think there are any constraints, and that is the wonderful thing about it is that you can really adapt it to the medical environment. You can take whatever we have to say and take a little chunk here and a little slice there and then the rest of it may or may not be interesting, but I think the important thing is we are dealing with human beings and human error and human reactions, human defense mechanisms and egos and everything else. That really does not change from one sector to another.

MS. LIPTON: So you really are suggesting, though, that with a mandatory system comes that, "Oh, no, we are not going to get involved. It didn't come from us," and you already have a cultural barrier to even changing a culture.

MR. FRANCIS: I think that is the problem

The system really is based. The institution has to be participative, but ultimately you are talking about the nurse, the surgeon, or whoever it is being willing to make the report.

DR. AuBUCHON: Dr. Chamberland?

DR. CHAMBERLAND: May Chamberland from the Centers for Disease Control.

I wanted to ask you a couple more questions about the voluntary reporting system. I was curious, first of all, that you mentioned specifically American Airlines. Have other airlines chosen to adopt this system, and if so, have there been any attempts within the industry to try to standardize it so that reports could be aggregated or analyzed by a central body such as FAA?

MR. FRANCIS: The answer to that is FAA has finally, finally, after years of those of us beating them, come out with an advisory circular, and there is going to be a standard.

One of the things, quite frankly--and you are going to get into this as well--is the Department of Justice was a problem on these kinds of systems saying, "Well, if we do this in aviation, then we are going to have to do it in EPA," and blah, blah, blah, and is that the same. So a lot of people have to be educated, but, yes, there are a number of other airlines now that are going forward. There is another one in Texas, actually.

DR. CHAMBERLAND: Let me ask you one other objective question. The self-reports, a pilot talking about an altitude error that was his responsibility, do reports also come into the system from other persons reporting an error or a near-miss that may not necessarily have been their responsibility, they observed their copilot or someone else?

MR. FRANCIS: Can be.

DR. CHAMBERLAND: Can be.

MR. FRANCIS: Yes.

DR. CHAMBERLAND: My last question is because you made the point that voluntary reporting does not equate with immunity, do you think that the self-reports to these voluntary reporting system are in any way biased that the less serious near-misses tend to be reported more than something that might be viewed as more serious and potentially subject to some sort of a disciplinary action?

MR. FRANCIS: I do not think so because I think when you get into this area of intentional or basically lack of competence, then you are going to end up with some kind of another way of finding out about that through the air traffic control system or a passenger saying something or whatever it is or another crew member.

Let me just mention one thing, and again, Texas is doing well with me here, at least today. In the Dallas-Fort Worth air traffic control facilities, there is a controller self-reporting system that is being done by the FAA--I worked for the FAA for 20 years, so I get a little confused--internally within that. That really was a function of enlightened leadership within a facility of the FAA system.

DR. AuBUCHON: We will not take the continual references to the State of Texas as any political statement in this primary season.

[Laughter.]

MR. FRANCIS: I am lobbying for them to get started.

DR. AuBUCHON: Dr. Davey?

DR. DAVEY: Just one quick point and follow-up on what Mary and others have asked. Obviously, the American Airlines system has worked because of a real confidence in confidentiality. How do they ensure that so that people have confidence that their information is not going to be--

MR. FRANCIS: By not dropping the egg because, once the egg is dropped, you are screwed, and I am glad you mentioned that.

This kind of a system is based on confidence and trust, and once you bust that, you have really had it. So, if the FAA goes and takes enforcement action against one of these guys or American gives somebody a week off without pay, your program is dead.

DR. AuBUCHON: Dr. Epstein?

DR. EPSTEIN: You have spoken a lot about the need for reporting. Where we stand in transfusion medicine is that we already have a mandatory system in large measure, although it is being expanded, requiring reporting.

I think a lot of our concern actually is focussed on improving that system, particularly in its analytical dimension; in other words, doing something with the information.

I was also very impressed in some of the literature about looking at safety in aviation medicine, at the focus that came about on the human dimension and the need not to overlook human interactions as part of design of systems as well as in the analysis of error-related data.

So I wondered if I could get you to comment at all about the way one might optimally design reporting systems so that they lend themselves to better analysis and so that they lend themselves to capturing the human dimension, assuming for the moment that we have adequate data to deal with it. It is a question of doing something with it.

MR. FRANCIS: I am not going to touch that. There are people that know a lot more about that than I do, but I will put you in touch with Captain Scott Griffith who runs the program for American Airlines, and he knows a lot.

DR. GOMPERTS: I have a couple of questions. You mentioned that it took American Airlines 10 years to develop the system currently in place. Why did it take them 10 years? What were the major hurdles?

My second question is my assumption is it is not a perfect system. Are they continually working on this particular system?

MR. FRANCIS: They are absolutely continuing to work on it, and the entirety of the industry is now moving that way. So you are going to end up with comparisons, and hopefully, at some point, you are going to end up with some kind of either data or information-sharing between airlines.

In aviation, you have another very, very powerful tool which can be used in conjunction with self-reporting systems, and that is very sophisticated, recorders in the airplane now, which basically tell you what is happening. It is partially an answer to her question as to how you find out what is going on. The recorder will tell you what is going on. What the reporting system tells you is why did that happen.

To be very candid with you, the reason it took 10 years is the kinds of attitudes that you are all familiar with. It is doing business the way we have always done it. We are very comfortable doing business the way we have always done it, and the naysayers are always there, whether it is the liability argument or resources or blah, blah, blah.

DR. AuBUCHON: Mr. Allen?

MR. ALLEN: Mr. Francis, thank you, once again, for being here.

One of my main concerns, even as I listen to the comparisons between the aviation industry and our particular topic here, is the difference between the mandatory system and a voluntary system as it is set up in the report we heard earlier, and the notion of a near-miss or a minor injury in my eyes is a concern as a consumer point of view.

When a consumer goes into a hospital for a certain procedure and there is some type of a problem, whether you call it a near-miss or a minor injury, that in no way, shape, or form for that consumer is minor or near-miss in their perception of things.

So my concern--and it is not so much maybe as a comment for you--I know there have been certain consumer issues that have revolved around accidents in aviation, but my concern is how this is going to affect the consumer's perception of why we are doing certain tracking. Is this tracking being done? A lot of consumers will look at it as this tracking is being done mostly for liability concerns versus safety concerns for the patient. That is an issue that I think we need to discuss.

Maybe at some point, Dr. Satcher could get back to us on any information they may have on these kind of problems.

Also, a patient when they have a near-miss, as we are calling it now under this voluntary structure, they have a right--

MR. FRANCIS: Interesting terminology. I had wondered where you got that.

MR. ALLEN: Yes.

They should have a right to demand an explanation as to what went wrong and also a solution so that this doesn't occur again. That is one of the things that I think we need to address in terms of a voluntary system, are these things going to happen.

One of the things that was mentioned earlier is that the data makes a difference. I am concerned if our doing this arbitrarily, who is going to report, whether or not they feel it was serious enough to report. In one person's eyes, it may not seem relevant; in another, it does. So I think there are some issues that we need to deal with.

MR. FRANCIS: I think you are absolutely right. There has got to be a universality of standard, and I think, quite frankly, that is the kind of thing that might lend itself to congressional leadership in legislation so that the State of Illinois and the State of Massachusetts are singing to the same music. Even if you end up with--and I don't know enough about your industry, whether it is statewide or however it works, but there is going to be the same kind, if you get these systems working, of hopefully interest and then starting to share at least information, if not data, between Illinois and Massachusetts. If they are not compatible, then obviously you have got a problem.

MR. ALLEN: Would you possibly agree that in an area where there has been adverse problems in the past, such as blood safety, that we need to sort of look more towards the side of making sure the consumer is assured that we are looking out for their best interest versus looking out for any liability issues? I think that is an issue for consumers as they listen to any problem in this country, especially safety along the blood industry. We should learn from the past, and we should make sure that we do things so we don't repeat it.

MR. FRANCIS: See, I do not see these as mutually exclusive, the safety and the liability issues and the good of the consumer, and I think the American public are pretty sophisticated. If you can articulate that to them, I think you are going to be okay.

DR. AuBUCHON: Dr. Kuhn?

DR. KUHN: Mr. Francis, I was very curious in the voluntary system that American Airlines has adopted. Do you have any examples whereby the voluntary system has helped to absolve punitive issues, number one, and also help to absolve any liability issues? I was wondering if you had any kind of specific examples that you could help us gain more clarity as to how this voluntary reporting works.

MR. FRANCIS: I am not familiar enough with the specifics to be able to answer that. I think on the liability thing, probably that has not been a factor.

What was the first part of your question?

DR. KUHN: An example perhaps on how the voluntary reporting has helped to absolve a punitive issues that may have come up in the aviation.

MR. FRANCIS: I do not know for a fact. I would be surprised if that has not happened.

DR. AuBUCHON: One last question. Dr. Chamberland?

DR. CHAMBERLAND: Thanks.

One more question about the voluntary system.

DR. AuBUCHON: It is not going to be a total second round.

DR. CHAMBERLAND: Currently or are there plans to, now that additional airlines are being brought into this voluntary reporting system--I believe from my reading, the data are shared certainly within the industry; that there is feedback to the pilots, let's say, for example. But are there plans to publicly release, let's say aggregate, unidentified data from the reporting system? Airline A in the month of January had 500 pilot self-reports. 200 of these were altitude errors or misadventures, whatever. So I was curious, particularly in light of some of the comments that Larry Allen made, about just how much information from this voluntary system is actually made public.

MR. FRANCIS: The answer to your question is at this stage, I think, no.

Let me end by talking about--because there was a reference in this Times article to the competitive issues of regional hospitals, et cetera.

This issue of ranking, you have got to be pretty careful that when you start reporting that you are not providing a deterrent obviously for people to participate, there are wondrous Senators always who think it would be wonderful to rank all of the U.S. airlines by safety.

So you go and you take Airline X, and there is something called service difficulty reports. They report them, and they go into a big database. Maybe you learn that your auxiliary power unit is having a problem, and that helps you talk to Allied Signal or Honeywell or whatever it is now about this system.

If you look at the data and you do not know what it is talking about, the fact that Airline X has reported 10 times as many STCs as Airline Y, to me means that they are safer, but it is not going to be construed that way by Senator Twitchett.

Thanks very much.

DR. AuBUCHON: Thank you very much for your presentation. We appreciate it.

As you pointed out, clearly there are many factors that go into human error. It is a complex situation, and our next speaker has been there and has investigated it personally to help put a human face to this and help resolve the question of where some of these errors come from with the idea that we may be able to reduce their frequency.

Dr. Bosk will be speaking on his book, "Forgive and Remember," an historical perspective on human error in medicine.

DR. BOSK: I would like to thank the Commission for this opportunity to speak today, and before I formally begin, we need a small technology transfer. I was supposed to have a place where I could show a PowerPoint presentation. I think the computer has been brought here. I am not sure it has been hooked up. Is it here?

[Pause.]

DR. BOSK: Let me vamp for time for a few minutes during the setup here.

I have been thinking about medical errors since 1972, or at least that is when I started, and I think in 1979, I suggested, as Mr. Francis did today, that it was time for the medical profession to get on with it and stop whining. So I am gratified to be here with a little bit of a delay.

I would like to describe what were two signal events in the thinking about medical error. The first was really quite innocent. I was home at Christmas break in that time of year when graduate students have to come up with thesis topics, and I was visiting with a friend from, I would say, preschool, but it is probably safer to say high school. He was a medical student about to enter the wards.

His first comment to me was, "I am very scared," and I said, "Why?" He said, "I could make a mistake. It could kill somebody, and then tomorrow, I might face exactly the same situation. How would I know that I wouldn't make the same mistake again?"

I thought about that for a long time because it did not seem to me that I had the psychic strength to face that kind of situation. I did not yet know that it was not real likely as the first day on the wards that he would get that kind of responsibility, but I did spend quite a lot of time thinking about in fact how does one deal with just that problem of having to face again the situation that you just messed up on the moment before, and how do you do that with some kind of equipoise.

The second story is a little bit more humble. Once in a moment of failed consumer research, I bought a Fiat 128. I did actually do a lot of consumer research, but it came down to that that was the only car that you could buy for $1,400 and pick up in Europe and ship back for free. I was going to be in Europe, and I had exactly $1,400 and needed a car. So the research did not matter a lot.

The car got shipped back, and it worked beautifully for, oh, 20,000 miles or so, and then it stopped working beautifully. I took it to a place called the Hyde Park Foreign Car Hospital and Clinic where after extensive analysis I was met with a mechanica who said to me, "We have seen two others just like it. One person let their car die with dignity. The other tried to repair and not successfully, and after they spent more than the car was worth twice over, then they had to let it die with dignity. What would you like to do? You and your wife can think about it for as long as you want. If you have any questions, come back to us. We know these are difficult decisions. The car will be here if you would like to sit with it."

[Laughter.]

DR. BOSK: But this is all very odd. I began to think about the relationships of mechanics and clients and doctors and patients around vague complaints that trouble getting fixed.

That is not eventually what "forgive and remember" became, and I suppose that in family law, there would be a third reason. Family members would tell you that I began to think about mistakes, and that would be, as my wife puts it not so gently, "You have never actually been to admit any quickly," to which I would then say, "But that's not true," and then she would say, "See?"

So I have been thinking about medical mistakes for a long time and from a lot of different perspectives.

May I have the next slide, please?

I have four goals today. I want to let you see, hopefully, that medical error is not something about which systematic thought starts with the IOM study or the precursor studies that they cite.

I want to show that there is a link between the study of medical error and medical uncertainty. Too much of the discussion to my mind precedes from the assumption that there is always one good clinical choice that anybody faced with that decision tree would make that one good clinical choice, and there seems to me to be little recognition so far that medical error is risky, that it is uncertain, and that responses are variable. I want to come back to that because it is not a reason for not proceeding with greater attempts to control error, but it is a reason to be humble.

I would like to also demonstrate that there is a connection between the study of medical error and professional norms. I think ultimately, all of those that are involved in health care that agree to which they fulfill their public service obligations, it is determined by how they manage error, how they see error, and what they see as their obligation to reduce the cost and the suffering of error.

Then I want to discuss the needs for and the barriers to a culture of safety in medicine. It is quite easy and no doubt truthful to say that we need a culture of safety because we do, but we also need to be quite clear about the long haul that will be needed to develop that culture of safety.

Those of us that are students of culture recognize that cultures change slowly and that there is a great deal of lag between our intention to change that culture and the actual change in the culture.

Next, please.

To start with a tradition of at least the sociological study of errors, I want to run through very quickly what I think of as four major moments. There are others, and I apologize to my colleagues who are not here whose work I have ignored, but as Popadis [ph] once said, art is long, but life is short, and it is even shorter when you are presenting before a committee.

Those four moments are: "Mistake at Work," an article published in 1951; Elliot Friedson's paper, "Processes of Control in a Company of Equals," which is expanded in his 1976 book, "Doctoring Together"; Don Light's "Psychiatry and Suicide," which is expanded in his "Becoming Psychiatrists"; and my own "Forgive and Remember."

Next slide.

"Mistakes at Work" is a wonderful, if unsystematic, paper by Everett Hughes, a sociologist who for me has always been something of an inspiration. This is a 1950 paper.

Hughes points out in his study of then professions, because what we did was study professions and other occupations--we did not yet study medicine other than as a subspecies of the profession in 1951, but Hughes points out that all work is composed of routines and emergencies, routines handled with aplomb, reinforced routines and a sense of competence and that this is the way that we ought to do things.

Emergencies, when handled successfully, create routines, which then reinforce the sense that we have new levels of competence, and that maybe this is not as much of an emergency as we once thought, and emergencies mismanaged create a sense of crisis and a sense of where indeed is the boundary of competence and certainly routines mismanaged create emergencies, as anyone who has routinely gotten on a highway and then missed their exit knows.

So that, there is this relationship, this oscillation, if you will, between routines and emergencies, and it is part of medical work, and indeed, it is part of all work.

So much of this is the case that perhaps we can speak of a calculus of error that is based on the skill and the experience of the service provider, presumably those that are more skilled and more experienced, will handle routines with more aplomb and face fewer emergencies, and those emergencies that they do face, they will be able to face more competently than those who are inexperienced.

This is a point that I will return to in that the issue or the major issue in "Forgive and Remember" is how do you let the inexperienced become competent when the bodies on which they have to practice are real human bodies, where their lack of skill and experience might have irrevocable consequences.

Hughes speaks of medical work as it is organized in hospitals, as consisting of a set of risk-spreading and guilt-shifting devices that make error both difficult to identify and then responsibility for error hard to fix.

So Hughes looks at patterns of shared coverage, consultation, supervision, team work as both something that has the upside of improving care, but the very potential downside of making it very hard later to say what exactly happened and why and on whose authority, something that we perhaps no longer think is as important as we move to systematic or systemic approaches to controlling error, but we ought to keep in mind.

Next slide.

A student of Hughes, Everett Elliot Friedson--there should be no "C" there--that is a mistake--wrote a paper called "Processes of Control in the Company of Equals," where Friedson was looking at was the monitoring of physician performance in a large multi-specialty clinic.

What Friedson points out is that large multi-specialty clinic with centralized records would seem to be an ideal setting for supervision, surveillance, and monitoring of performance. If ever there were a case where physicians could exercise that social control of one another that they claim to exercise all the time, one would think it would be in just this setting, but what Friedson found was the monitoring, surveillance, and supervision were haphazard at best. They were governed by referral relations, patient complaints, and colleague gossip, or what I believe he called lunchroom review.

What sanctions existed were largely informal within this clinic. If I was unhappy with what you did, I might issue that stern sanction of a talking-to. A talking-to is just what it sounds like: Why did you do that? What was going on there? What were you thinking?

And depending upon the response of the colleague who was talked to, the person who felt offended by the level of care could then engage in an informal boycott: I will no longer refer to X. That is usually as far as it went, said Friedson. Formal boycotts from the point of view of patient protection certainly have extreme limits to the degree that the judgments of informal boycotts are shared by networks of peers. They simply sort practice networks out into those that I think are okay and those that I don't, but they do nothing about those that I think are not okay.

Then, finally, Friedson said that egregiously poor performers are rarely, if ever, dismissed. Occasionally, there is spectacular error, but only occasionally and then not so noticeable as, say, a plane falling out of the sky, and then there is, of course, the tendency to scapegoat the individual that was involved.

But what Friedson found was even that scapegoating took the form of the face-saving resignation without the letter being placed in the file or the record following.

So, whenever possible in this multi-specialty clinic, which in its time was held up as a model of such clinics and an issue of what could be achieved by effective social control among professionals, social control existed more in the breach than in the performance, and in his conclusion, Friedson chastises all of the medical profession for being what he calls a delinquent sub-community when it comes to policing error.

Where I probably might part with him is that having lived in a university for a long time, it is not so clear to me that any human groups eagerly take on the task of policing error.

Next slide, please.

My colleague at the Penn Center for Bioethics, Donald Light, studied the socialization of psychiatrists, and unfortunately for a patient, but fortunately for his research, there was a dramatic suicide on one of the wards while he was making observation. So Light opportunistically focussed then on the suicide review in this elite training program to see how it was that in a training program, a mistake that quite clearly raises issues of whether a mistake was made or not is handled.

There was a long lag time between the actual event and the review. The lag time was said to promote the cooling down of feelings. The emphasis in the review itself, which was done by an outside external reviewer, was how, despite evident therapeutic shortcomings in the actual treatment, this suicide was not preventable, it was not the fault of the therapist. Those treatment failings had nothing to do with the suicide itself, and what Light points out is a great deal of effort was made in the review to point out how little psychiatrists controlled of their patient's behavior, lest trainees develop inappropriate rescue fantasies.

Finally, Light found that the review itself functioned as a reintegration ritual. It brought the community together, and it firmed the value of psychiatry from the lessons they learned for the future, though exactly how those lessons might be applied if in fact there was this constant belief or underlying belief that what we do does not matter all that much anyway is a question he does not actually address.

Next slide.

Finally, the last moment in the short review is my own work, "Forgive and Remember," which is a 1979 book published by the University of Chicago Press. There will be a second edition sometime this year. It is the study of surgical residents in an elite training program.

There are within it three major themes: membership in a professional group, social control, performance in a group, and the shared patterns for understanding and explaining neutralizing or disciplining error that emerge in that group. It is a field study. It is based on my observations. My sense of its validity and reliability have been enhanced by the fact that the hospital I observed, I called Pacific Hospital. I did that because my Latin told me that was a little nice irony from what I actually observed, but readers, especially surgical readers who reviewed it, decided that Pacific was a geographic reference instead. For the first couple of years that the book appeared, I received somewhere between 15 and 20 phone calls among surgeons who had bets with each other that I had observed in their institution.

They were never right, and because of the agreement I have with the institution, I am not at liberty to say where it is that they now self-identify, but it always gave me pause that you would think people would know that they had been in a study and been observed by an outsider. I guess I was even more in-obtrusive than I thought.

Next slide.

"Forgive and Remember" does a couple of things. The first thing it does is present a typology of error, and it occurred to me that errors could be divided up--this is not unique to medicine, I know, but it fit medicine quite well--into blameless error and blameworthy error.

Blameless errors were technical and judgmental errors. A technical error might be not tying a knot tight enough, all of the kinds of things that you would expect inexperienced folks to do. Technical errors were largely tolerated by residents so long as they occurred only once on a rotation. The second same technical error on a rotation was no longer blameless and, hence, no longer technical.

But the idea I was constantly told of surgical training was to allow the inexperienced to make those kinds of common mistake that one would make in practice and under highly supervised conditions where one would learn not only how to make those mistakes or how not to make them, but how to recover from them if one made them.

Judgmental errors were likewise blameless. They were at the top of the hierarchy. Pacific considered itself an elite place. A privilege of that elitism was that it could do the types of risky surgery that no one else would attempt. Occasionally, when going where people had not gone before, one makes judgmental mistakes or, quote, "those kind of errors" that only an attending can make, but attendings were protected from any negative implications of such judgmental errors by their own high status. In a sense, to keep that high status up, they needed to do some things that no one else would try because that is how progress was made. That is at least how it was explained to me.

Blameworthy errors divided up into two types. The general rule here is that surgery, which was a hierarchical training system operated under a rule of no surprises, the person who supervised you never wanted to be surprised when they came in, in the morning by what they found with a patient that they had left in your charge the night before.

There are certain rules with regard to the treatment of patients that I think are universal and are shared across specialties and normative errors were the kinds of mistakes that breached those rules, fighting with nurses, not seeking help when that help is appropriate, not responding to pages, not getting informed consents, that sort of thing, things that everybody would agree on. The rules were universal.

What gives surgical services their character, however, is very often the unique personalities, as it is framed, of the attendings that head them. That unique personality or leadership style often means that specific attendings have rules all their own. Those rules are sometimes irrational.

However, not to follow them when you are a subordinate on the service is an act of great insubordination. So, to close a wound on a service where wounds are left open, to use paper tape where cloth tape is used, to staple where staples are not used, so on and so forth are treated as if they were breaking an absolute universal rule.

I should say one more thing about the topology. When an error occurred, it does not neatly fit into one of these four boxes. Rather, two things are going on all the time. There is a search for an answer to the Question A, is this an error, or given what we knew at the time, was this a reasonable decision or strategy, and, B, if it is an error, is it an innocent error or not, and the answer to that question is often determined by the response of the person who made the error.

The worst thing that you can do as a resident on a surgical service seemed to be to try to explain what it was that you did wrong.

Next slide.

The kinds of errors, blameless and blameworthy errors, not surprisingly had different kinds of responses. Those errors that could be coded as technical and judgmental, they were more or less considered normal errors. They were predictable. They were expected in the training environment, and they were seen by everybody as an opportunity for correction, for supervision, and for both informal and formal craft wisdom to be pointed out. "Oh, I see what you did wrong. Let me tell you what happened to me when I did the very, very same thing as a resident." So there was both a didactic component to those blameless errors, but there was also a way in which those errors served to welcome people into the profession if they indicated through their learning that they were the kind of people that were not going to repeat those same kind of mistakes again, that is, if they were normatively adequate.

Normative and quasi-normative errors, however, were different. They were seen as negligent errors. They were seen as a signal of personal failure, and they were often used as opportunities for scapegoating and exemplary punishment.

The sort of sad sociological truth to the residency I observed was in order to make what those norms were clear, you needed at lease one public exemplary punishment for the rest of the group to learn from, and this is not to say that there was not more than one opportunity or that the best opportunity was always taken, but there was something about public scapegoating that served a very valuable function in making clear what would be tolerated and what not.

Next.

I think to begin to sum up and move to a conclusion, there are some themes in the sociological study of error in medicine.

First, error is related to uncertainty of difficult clinical choices. One of my subjects once said to me, "You can lead a long and happy life without deaths and complications. You are just going to have to give up major surgery to do so." It seems to me that the larger lesson there is that American medicine is committed to a kind of aggressive intervention that other national systems do not engage in.

There is a relationship between that aggressive intervention and the amount of error that we observe, and it is important to keep that in mind.

One way to reduce error is to ration care very differently than we do, and politically, that is not an acceptable option, but you did not need me to tell you that.

The control of error is a measure of the profession's fulfilling its obligation to society, but what is error is essentially contestable. Too much of the current debate proceeds as if error was a form of bird-watching, and all we have to do is look through the binoculars and identify the species and put them in a chart.

There is a prior step. We ought to be very, very clear that what we are calling an error in medicine is an error, that we are doing it prospectively rather than necessarily. What is retrospectively an error is not the same thing as what is prospectively an error, and we ought to be clear about that, I guess, is the simplest way to say that.

During the time I did my research and even until today, certain normal errors are tolerated, and I take the Institute of Medicine report and the call for a greater culture of safety to say that we are coming quite happily, I think, to an end of that toleration.

Next.

So we want to promote a culture of safety, and the goals of that safety should be to reduce the tolerance for and the incidence of normal errors to create what another one of our speakers will call generative organizational structures that seek out the causes of those errors and act with requisite wisdom to remove defensive identifications, reactions to error identification and correction, and, of course, I guess it went without saying because I did not put it on the slide, to reduce human suffering and to improve the quality of care. Those are, it seems to me, unarguable goals.

Next slide.

But they are not easy goals to reach. First, to the degree that aviation and crew resource management serves as our guide, we need to take into account what Elliot Friedson has called the professional dominance of physicians. There is not out there a lot of good models of effective teamwork in hospitalized care.

I know that that is supposed to be changing, but the captain of the ship model is still more prevalent than most of us would be willing to acknowledge.

Second, the cost of redundant systems or new monitoring systems in an area of cost containment will be politically contentious.

Ken Shine said earlier that error reduction is not about managed care, and he is surely right about that, but it may be about prospective payment. It may very well be that there is a relationship between the fact that we have fewer ancillary staff and hospitals, that the acuity indices have gone up, and at the same time, that support staff has gone down, there may be a relationship between that and our current discovery of error as a problem to be fixed.

Finally, it seems to me that we also ought to recognize the inevitability of error in a medical care system dedicated to aggressive intervention. That does not mean that we should tolerate it, but we ought to be clear about what the relationship is between our commitments to aggressive interventions, especially in places where our hope very often outruns our wisdom and what we later call error.

Next slide, please.

And it would not be reasonable to stop without talking about the promise of the culture of safety. Those promises include, but are not limited to, a more genuine partnership among all stakeholders, patients, providers, and payers, and I think that more genuine partnership means more genuine trust. In the end, I think it means less suffering and more genuine care.

A more efficient delivery system, a higher-quality care at ultimately less cost, whatever the startup costs--I mean, the startup costs here are going to be high. The hope is that we find a way, A, to distribute them fairly and that, B, in the end they make for less overall costs and finally a less stressful work environment for health care professionals which will certainly benefit patients, fewer problems with issues that concern me now in my research, burnout turnover and impairment.

Thank you.

DR. AuBUCHON: Thank you very much.

Are there questions from the committee members?

Yes, Dr. Hoots.

DR. HOOTS: First of all, I would like to applaud you on the book as well as your presentation a very cogent look at what obviously has evolved over decades, if not centuries, in terms of practice and things that evolved, I think, because they worked, but not because they were systemized. Now we have to look at it from the other side of it and figure out what has been, what ought to be, and what maybe should not be systemized out of the process.

In that context, I am really intrigued, and the last couple of points on the last two slides that kind of bring it together is the concept of judgment error as it would relate in "Forgive and Remember" to the attending surgeon where, as you said, there is kind of an understanding that aggressiveness will be applied if aggressiveness is felt to be appropriate. You gave examples of both where it was denied and where it was given.

DR. BOSK: Yes.

DR. HOOTS: Then that confers for at least at that level, when it is practiced at that level, an inevitability of error. There, I think one of the things we have to look at very carefully is the concept that inevitability of error in judgment is a concept, and you have alluded to it already, whether it is a pro- or a retroactive determination of error because what happens on that clinical edge--and it happens to surgeons probably most, but it happens in every branch of medicine, is that once a patient is very ill, one of the things that has to be determined is how far you do push and then what methodology you use to push.

I think the way we traditionally do it and one of the things that does work, and you alluded to this in your last slide, involve in the process the players.

On the drug side, we have very tight controls. We have the FDA saying this is an IND and you cannot use it unless you follow these prescribed things.

On the interventional non-drug side where it is a practice, we do not have the same constraints, but what we try to do, I think, and what maybe we ought to think about even to a greater degree is saying how that communication process of risk to the patient and to the staff and part of the team so that at least somebody finally has to make the decision, but certainly out of that triad something can come constructive so that there is not really an inevitability of error because it is a judgment, and it is a judgment that is made collectively. Therefore, only in a Monday morning quarterbacking and perhaps by a plaintiff's attorney would that ever be seen as a judgmental error. I think that is part of where we get into problems is that people do not do that proactively.

DR. BOSK: One, I would like to agree with that, and, second, say that even if we do not do that as proactively as we ought to, we certainly do it a whole lot more now than we did when I was doing my field work.

Through the device of the ethics consult or the ethics consulting service or what have you, there is at least the opportunity to force a discussion because we have a much more muscular idea of informed consent than we had in the 1970's when I was doing my research. You get a difficult situation. An attending would repair to his tent and say I will come back and tell you my judgment later, and then when questions about that judgment quite literally would say there are decisions that only an attending could make. If you had said how do you make them so you are an attending you will know or on what basis do you make them, are you challenging my authority--I was observing a system in which the most potent weapon that a subordinate had was the gentle question. "Sir, I see that you are planning to do X, but in the clinical series reported by Dr. X somewhere else, he reports that that never works." If the attending says, "Well, I was a resident with Dr. X and I would not trust a thing he ever said," then that was it. That was end of discussion.

One of the things of a more muscular informed consent and the development of clinical ethics have done--and in other places a critic of the development of clinical ethics, but for these purposes, one of the things that clinical ethics has done is allowed the playing field to get leveled by allowing these issues of full hearing. However, not enough has been done to make patients aware of that as an option, to make families aware of that as an option, and there still remains something like fear of reprisal by both subordinates and nurses when that is done.

DR. GUERRA: I also want to thank you for what was a very enlightening presentation and one that captures some wonderful history in the evolution of biomedical science, but it did not quite get us to where technology is today. How would you factor that in, in dealing with some of these important issues related to errors?

I think the advance on the technologic side and also with some of the decision-rule type of algorithms, they carry you so close to resolving some of these more complex dilemmas.

DR. BOSK: The first comment is you are right. Obviously, the world has turned a time or two, and a lot of this needs to be reupdated and relooked at.

My question would be--and it is a sincere question--to what degree do old traditions of clinical individualism and old traditions of this is my patient inhibit the use of that technology, to what degree do treatment algorithms--to what degree in the field sort of on the line are folks aware of treatment algorithms, to what degree do the ways in which the patterns of individual responsibility that folks learn while in medical school make them want to be risk-takers when they think their patient is not going to respond to the treatment algorithm.

The treatment algorithm may say that, but this nice folk has been my patient for 10 years and I know that she will not respond to that. So I am going to do this instead. Clearly, that is work that needs to be done. I just do not know the answer.

MR. ALLEN: I noticed the majority of your report is based on the effect this has on the physician, and I am curious about what effects this has had on the consumers, if you did any follow-up on that, and also what kind of interactions between the physician and the patients occurred after these said errors occurred.

DR. BOSK: In the 18 months that I did my field work, I actually never saw any interaction between a physician and a patient about a mistake. Mistakes happened to patients, but that was not seen by my subjects as sufficient warrant for actually informing patients about that because, after all, if it had not happened to this patient, it would have happened to some other patient. Why wake a sleeping dog? I do not mean to use that in the awful ways that it could sound, but why wake up the patient to this fact? We will handle it internally. This is a professional matter. It seemed to be the way that that was dealt with, and I actually had lots of questions about that as I did my study because there was a whole group of patients at this elite training center that I was looking at that were referred to by the staff inelegantly, but accurately, as "dumps."

These would be patients that came in from another hospital in the neighborhood with their name sort of attached by a piece of paper to their hospital jammies, but not necessarily with a chart or a record of what happened. Occasionally, there might be a treatment.

I used to look at the dumps, and I could understand at one level why you might not want to go after George who lives with you, who has screwed up, but who you see every day and who is also aware of where you screwed up. I can understand why the colleague that you worked with and that you saw daily, you might be gentle about, but I would ask a lot about the dumps.

You can believe that as a sort of curious outside observer, the dumps puzzled me because it seemed to me that the dumps were an easy way to heroically make a stand for the cause of patient care, so that this heroic and grandstanding was not done. It kind of puzzled me, and I asked about it a lot. The answers were always two.

The first was whoever did this, unless they are really a bigger fool than I think, know that they screwed up and they sent for help, and if I go after them, then the next time they screw up, they have an incentive not to send the patient where the patient can get help. That was explanation one.

Explanation two was the legal explanation that we have heard adverted to over and over again, and it went something like this. Life is short, and I was not there. If I begin to complain to the County Medical Society, tell the patient so they can institute malpractice, whatever, I am simply creating for myself a world of pain, a world of pain and a world of counter-suits in which no one will ever be helped.

To the degree that this fits the pattern of a conspiracy of silence that the Institute of Medicine report talked about, then a conspiracy of silence there was.

I think it was you could speak of inter-institutional patterns of openness and intra-institutional patterns of silence, and we need to find ways to break those down and we need to find ways to communicate to patients.

I think the latest data tells us that medically, legally, it is safer to tell your patients what happened rather than to have them have to institute an active search for the truth because that is where you get into big liability troubles.

DR. AuBUCHON: We are going to have to abbreviate this discussion.

Dr. Guerra?

DR. GUERRA: If I could ask Dr. Bosk very quickly, could you do a similar kind of field study and observational methodology in an area other than surgery, pediatrics, for example, or geriatrics, where you are dealing with some very different kinds of issues?

DR. BOSK: Yes. I mean, the short answer is yes. The longer answer is it is more complicated, but colleagues of mine have certainly done field studies in obstetrics gynecology, internal medicine, pediatrics, pathology. They have not all focussed on mistakes, but if there is a human group and if it develops a notion of what is an error and if you are sufficiently sensitive to the nuances of that group, it is both its notion of an error, formally and informally, and how that notion fits well with larger social pieties about error and professional obligation should become clear.

DR. AuBUCHON: Mr. Allen, last question.

MR. ALLEN: Basically, I just wanted to comment on your last comment to my question. What you are saying leads me along the lines that mandatory reporting at this point in time still needs to be necessary versus, in a lot of cases, voluntary.

Just based on what you said, it gives me the idea that some of the reasonings why a physician would not follow up on a report to another physician, that would not happen with mandatory, or even if it did not--

DR. BOSK: I am not sure I agree, and let me do it quickly.

Voluntary or mandatory reporting to work, you need a lot of work at the culture of the shop-floor level in terms of a new set of openness about what a mistake is. Both mandatory and voluntary reporting systems still leave the public at large at the mercy of the integrity of the individual physicians.

Just Sunday--I missed Monday's Times, much to my regret--having heard about it a lot, but Sunday's Philadelphia Inquirer contained a very interesting report. Pennsylvania has had a mandatory reporting law for 13 categories of adverse event. During the first 18 months, the 35 busiest hospitals in the Philadelphia metropolitan region reported amongst them all one adverse event.

Now, that means they either underestimated by 10,499 or 14,499, depending upon whose estimates of error you used, but, either way, one is probably an underestimate, and that is with the mandatory reporting law.

So I do not think just simply putting the law in place without doing the groundwork at the level of the work group about what it is that you report and why it is important and a whole lot of other things--I do not think the voluntary/mandatory distinction is the one that you should hang your hat on. It is, is this an adverse event and can you see it in front of you is the one where I would hang my hat.

Thank you.

DR. AuBUCHON: Thank you very much, Dr. Bosk.

DR. BOSK: Thank you.

DR. AuBUCHON: We have talked a lot this afternoon about having an appropriate culture to deal with errors and their discovery and resolution, and our next speaker, Dr. Ronald Westrum, will be dealing with that directly on the cultures of conscious inquiry.

Dr. Westrum?

DR. WESTRUM: I very much appreciate the opportunity to be here today with you to talk about this subject which is a very interesting one to me, in which I have tried to make myself the master of. I admit my understanding in this area is rather imperfect, but I had planned to share what understanding I do have on this subject with you.

Next slide, please.

Now, one of the things that I think is very useful to think of in the medical context regarding the kinds of events in which we are interested in transfusion or other areas of medicine is to imagine that the medical event is surrounded by a human envelope of care, and what I am going to talk about today is differences in the way that this envelope functions in different kinds of cultures.

I think one of the primary things that is important to understand is the Reason model. How many of you have heard this expression, "the Reason model of accidents"?

This is something that I had hoped that Bob Helmreich would cover, and I think, unfortunately, the loss of Dr. Helmreich is very serious, but to make it very brief, Jim Reason is a psychologist who has done studies of organizational accidents. One of the things that he has found is that typically the accident is something that precedes by, first, an accumulation of unseen dangerous situations.

An everyday example would be, for instance, the magazines on the top-of-the-basement stairs or the piano in front of the fire exit.

Organizational examples might well be something like a taped-over alarm or an ocean liner with too few light bulbs.

Reason called these kinds of latent errors, "latent pathogens," and organizations, as we will see, have very different kinds of responses to them.

Steve, could you just skip one slide and put the latent pathogen one up there?

Reason says basically that a latent pathogen--and obviously, we borrowed this concept from medicine, shamelessly--is a hidden unsafe condition. It may lie dormant for a considerable amount of time, and it may only be involved in accident causation when the operators make an active error.

Many of us, of course, tend to pounce upon the active error as being the thing that causes the accident. The point that Reason makes is that this active error is something that frequently interacts with the latent pathogens that have built up. So what happens is that this active error is either not corrected or sets in train a series of circumstances that bring about the accident.

Reason has looked at a number of major organizational accidents in transportation, such as the Trident crash in Canada and the sinking of the Herald of Free Enterprise. All of these accidents, Reason has found, were things that were triggered by mistakes at the operating end, but had been prepared, as it were, by a series of previous mistakes in judgment and leadership often at higher levels in the organization.

So an accident, then, is frequently the result of the combination of these latent pathogens and an operating mistake, and this is really a critical concept to assimilate because the latent pathogen then is for us in the aviation accident field one of the key concepts that allows us to understand how it is that organizational accidents unroll.

Now may we have the former slide, Steve?

This is one of the things that I think is very important to grasp when we are looking at accidents and organizations. Organizations respond very differently to assign that there might be a latent pathogen.

On the left of the chart here, we see what I would call denial responses to the problem that the person who is bringing the problem to public attention is suppressed, that in a lesser form of this, the person is isolated and made to sort to defend this on their own.

A somewhat more positive response is for the organization to admit that, yes, while there is a problem, but it is really not as serious as you think. I call this a public relations approach to anomaly.

Then the organization may actually fix the presenting problem, a local fix, and that is good, of course, but the issue is how many more things like this are there around. One of the values to a system of voluntary reporting is it helps you to understand how many more such situations there might be.

Then there is a global fix, and one of the values, I think, that aviation has given us is the idea that if you find a problem with one particular example of an airframe, it is probably true in all of them. So you go through the whole series of airframes and you fix them all. I think this is a lovely thing that I think is in part a legacy of the aviation system.

Finally, there is the possibility of inquiring into the basic causes of the accident, why did this take place, why are the underlying forces that brought this about, what might we do to put them at bay.

Next slide, please.

It is essential in my scheme of things to understand that I see cultures falling into three categories, and I admit that what I really have here is a continuum of cultures, but I think for discussion purposes, it is useful to think of them in falling in three types, pathological, bureaucratic, and generative.

A pathological culture is oriented toward domination and power. The single most important thing in a pathological culture is the protection and ennoblement and celebration of powerful people. In a culture like this, information is hoarded. It becomes a personal resource. Messengers are routinely shot. People avoid responsibilities because that is, of course, you can become a scapegoat. Failure is punished, and new ideas are crushed. This is a matter of course.

Next slide.

Fortunately, most of us do not work in a culture like that, right? We work in a bureaucratic culture which is more oriented toward rule-following. In the bureaucratic culture, the key thing is that you do things according to the rules. The outcome is not so important, but if you do things by the rules, it is okay.

In such a culture, information may be neglected because, after all, it may not fit well into the categories we have previously established. Messengers in a culture like this who tell us things are wrong are tolerated, but not necessarily encouraged.

Responsibility is narrow. It is perfectly possible for someone to say, "Well, gee, that is not my job." The organization, when it does find a problem, though, is just and merciful, unlike the pathological organization. A mistake doesn't lead to a witch hunt. It simply leads to a trial. In these kinds of organizations, new ideas kind of create problems, although they may be eventually adopted.

Next slide, please.

What we would like to have is a generative culture. I must admit that at the beginning when I proposed organizations like this, I seriously doubted their occurrence in certain kinds of fields, but I actually discovered there are generative organizations out there because I presented this talk to a number of different audiences with different subjects, of course, and discovered that, yes, there really are generative cultures that are oriented towards results and outcomes. That is the key thing.

A generative culture is much less concerned about the rules or about people's little fiefdoms and so on than it is about the outcome, how is it going to work out in the end, and in this case, of course, for the patient who is getting the blood transfusion.

Information is cultivated in a culture like this. Free-flowing information is absolutely critical to maintain a generative culture. Messengers are trained. People are empowered to make people know that there is a problem.

Responsibilities are shared. Rather than people saying, "Well, gee, that's not my job," somebody will look over someone else's shoulder and say, "I wonder if you could do that a different way," or, "Is this the best way to do something?"

When there is a failure, you tend to see it as a system failure rather than the failure of the operator at the sharp end who has just screwed up, and finally, new ideas are welcome. I think this is a lovely kind of organization, and we will talk more about this later. There are generative cultures, in case some of you doubted that.

Next, please.

One of the things that bothers a generative culture is it tends to promote a culture where there is conscious inquiry, and in conscious inquiry, you really turn the organization into a kind of learning system. The organization is really interested in why it does what it does and how it can get better at what it does. So, to do this, you have a lot of sharing of information. There is a labeling and monitoring of system states. So you know what state you are in. This is a very important thing when it comes to error because we have found repeatedly that people often do not understand either the amount of error that is going on in the system they have got or what parts of the system are likely to have the most error or what kind of errors people are making.

One of the things that we have found is that when we look at this, we discover that there are errors that we did not suspect that are quite common. We will give some specific examples. So it encourages people to identify problems, and it empowers people to fix them.

The human envelope, then, around the medical event in a generative system is one that is likely to be a very sturdy and solid--and if there are errors, and, of course, there are going to be cracks in any system, they tend to get repaired rapidly.

This is one of the things I want to emphasize. When you have a generative system, one of the things is you tend to repair the latent pathogens. They do not just build up. They tend to get fixed, and so this is a key thing.

More often, though, we have a bureaucratic human envelope, and here we have some serious fault. The thing about bureaucratic systems is that they do not naturally, like pathological systems, tend to create latent pathogens, but they can be very slow in fixing them. They will appoint a committee or there is some other kind of delaying tactic, and we will get around to this eventually, but this is not the problem for today.

So the bureaucratic envelope is going to have, then, a lot more latent pathogens in it than you would expect to see in a generative system.

The human envelope of a pathological system, though--and I have purposely dramatized this a bit--is likely to show a great number of faults, and the people who work in systems like this frequently have a kind of heroic character because they can make things work in a situation where the environment is not necessarily helpful to them in doing things.

By the way, one of my friends is a consultant, and he absolutely doubted that there were pathological systems. This is a person that had extensive industrial experience.

Well, God decided to teach him a lesson. So his next client had a pathological organization culture, and he has been calling me ever since asking me what do I do, what do I do.

The human envelope around a pathological system, then, is going to be very faulty, and it is going to have a considerably number of problems that are going to be unspotted and untreated.

Next slide, please.

The question that I want to pose, then, is given these three cultural types, all of you have been thinking while I have been talking about this about your own organization and where it falls on this continuum, and if we had more time, I would do what I have often done in such gatherings and ask people to confidentially report what is their own environment like. The responses are often quite interesting.

I would like to concentrate instead on some of the areas in which medicine has been moving toward a more generative culture and some areas in which I think there has been some really significant progress in dealing with errors and particularly errors that are associated with the way the organization as a whole functions.

One of these is the improvement of human factors, and human factors is the spatial and physical arrangements that are used to carry out whatever the operation is in question.

I think an excellent example of this is Jeff Cooper's critical incident analyses of anesthesia mishaps which led to better design of anesthesia machines and a better design of the operating environment for anesthesia. This is a marvelous example of somebody saying, "Well, gee, it does not have to happen this way. What can we do to improve it?"

Cooper studied in great deal the errors in a very large number of operations that were disclosed by essentially something like a voluntary reporting system. He got people to talk about operations and what had gone wrong and began to look at both the nature of the mishaps and what might be done to improve them.

Next slide, please.

Another area which I had hoped that Bob Helmreich would address at some length because he is literally the world expert on this subject is true resource management. In addition to human factors which deals with essentially things like layout and tools and so forth, there is the issue of human performance.

One of the things I tried to do some years ago was do a comprehensive search for studies of team work in medicine. This was not a very successful enterprise, I have to admit, and I am still looking for the book on medical teams that somebody should write. I think that would be very much a priority project.

But I find it very encouraging that people like Gaba, for instance, David Gaba, as an anesthesia used crew resource management which we have developed in aviation and turned it into crisis resource management to improve performance in anesthesia.

Those of you who are familiar with Gaba's work know that he has developed simulators. He has trained people to use these simulators, and I have actually seen films of some of the things that he has done. I find them very impressive, and I think it is very good because, of course, in aviation what we learned is that the first time somebody encounters a situation like this should not be with a real airplane. We have them encounter situations in simulators which both test their ability to handle crisis situations and also train them to deal with these things.

Of course, one of the things we found most interesting is that the simulators actually provide quite often a very high level of reality. So people take these situations very seriously, and if the simulator crashes, they actually feel for a moment, anyway, that they have been in a real crash. It is very disturbing to them.

The same thing, by the way, seems to be true of the anesthesia simulators which Gaba and other people have been using.

Next.

Another thing that I find as a very encouraging trend is the development of voluntary reporting systems. I think if you are going to deal with problems and you really want to see error management take place, you have to make the problems visible.

So one of the things we have learned in aviation is try to identify what kinds of situations are accident precursors. So the group will learn to monitor its internal states, and they can tell whether they are in a situation which is good or not good. I think this is a very important point not only for the immediate crew resource management, but also as a way that the system as a whole cannot begin to understand what kinds of problems it has.

I know that later today, Jim Battle and Hal Kaplan are going to go into this in much more detail and are doing some excellent research, which I think you will find very impressive.

Last slide, please.

Finally, I wanted to talk about one area which I think there is still some problems in aviation, and I leave you to tell me what kind of problems there may be in medicine. That is the control of rogues.

I define a rogue as a practitioner whose actions are questionable, but who is not held accountable. I think the earlier examples that Dr. Bosk talked about with attending physicians pulling rank and so forth are a good example of the possibility for people behaving badly, but Dr. Nightingale has encouraged me not to give some examples from medicine, which I was going to, for a variety of reasons.

So I would like to bring this discussion to some kind of conclusion, and then I am sure you have some questions.

My attempt is to try to characterize the kinds of cultures that organizations have. One of the things that I and some other consultants have done is to try to develop a scale for some of these kinds of cultures so we can measure the existence of these cultures in corporate settings. This is something obviously that a lot of people have done, not just me.

I think one of the things to appreciate is when you start off to change a system, you might ask yourself what kind of culture is it, because if you are starting off in a generative culture to create organizational change, obviously the organization is going to help you in a variety of different ways.

In a bureaucratic organization, much of that help is not going to be there. In a pathological situation, you have to understand that often from the point of view of participants in the system, particularly the powerful ones, the change situation really offers an opportunity to behave badly, to even up scores, to shift power, to remove resources from people they do not like, and so on and so forth.

I think one of the under-evaluated issues in organizational changes, how do you approach organizational change when the organization is in a relatively bad way.

So, to conclude, then, I would like to suggest that one thing we need to think about is, before we fix the medical problems, we may need to think about fixing the organizations that have the problems in the first place.

Thank you very much.

DR. AuBUCHON: Thank you.

Dr. Penner?

DR. PENNER: Would you comment on the purposes of the organization as to what relevance that might have according to what kind of responses or improvements you might be able to make?

DR. WESTRUM: Well, in medical organizations, certainly we would like to believe that patient care and patient survival is--

DR. PENNER: But that has changed, hasn't it?

DR. WESTRUM: Yes. And I think that certainly is an issue in the way the organization is set up. It is difficult to avoid that conclusion.

DR. PENNER: But then can you interact when the purpose, say, might be--

DR. WESTRUM: Yes.

DR. PENNER: --achieving success money-wise or achieving success patient-wise when you have got two different goals? Then the impact is going to be at least a different orientation, I would think.

DR. WESTRUM: Right. Basically, if you look at the diffusion of innovations, which I think is very relevant for looking at safety schemes here, what happens is we find that the early adopters tend to be healthier organizations. They tend to be more effective.

They also, interestingly enough, tend to take from whatever the innovation is the relevant parts for them. They customize it to their own operation.

The late adopters go along with everybody else because everybody else is doing it, and sometimes the organizations that you want to reach who are not too interested in doing these things have to as a follow-the-leader kind of thing, but they will typically do it later in the organization. It is always easier to reengineer the mission statement and stuff like that than it is obviously to improve results, and, hence, the interest in mandatory reporting systems whether they work or not.

DR. GOMPERTS: Dr. Westrum, when one looks at these three different organizations, clearly a generative organization has been shown to be highly successful in industry. New company startups who are going to function in a pathological mode as compared with the generative mode, there is no doubt in most situations a startup generative organization structure is most likely going to have a greater chance of success.

DR. WESTRUM: That is right.

DR. GOMPERTS: So, if one looks at the health care industry or one looks at certain health care environments in order to promote one versus another, there would need to be a financial incentive. The more likely positive outcome for the patient would be better if there was a financial incentive for a health care organization to function in a generative mode.

Somehow or another, if what we are talking about is going to result in some benefits, somehow or another, there would need to be financial incentives if ultimately we are going to reach those goals.

DR. WESTRUM: One way that this might happen is their reputation for safety. Obviously, we have seen cases where there have been--even in the blood transfusion area, some operations have been very questionable, and we have seen what happens when people lose faith in the safety of the blood that they get.

It is kind of interesting. For instance, I think the Krevor report is probably to blood safety what in aviation the Dryden accident was for us, the same kind of density of investigation took place around, and I think that certainly for the blood safety community, one of the priorities might be to look at the Krevor report and say what are the organizational lessons that we can learn here.

I have been looking over at the request of Dr. Nightingale, and one of the things that I have seen in this report is there is a very large amount of bureaucratic behavior. I think this is a very kind of disturbing thing, especially when you see some of the actions that got taken, but I think reputation for safety is a very strong motive for improvement in this area.

I am not sure I answered your question. Did I?

DR. GOMPERTS: Yes.

DR. PENNER: That might lead to looking at it from the other side, an incentive versus a disincentive, disincentive being in two modes. One would be legal action because of problems which would be an disincentive not to improve things or by, let's say, interaction on a public level where one would have a disincentive--

DR. WESTRUM: Yes.

DR. PENNER: --through some sort of standard set up by the Government.

DR. WESTRUM: Yes. In fact, that is a very interesting point.

Basically, what I see the change process going through is four stages. The first one is a stage of ignorance. You do not really know how bad the situation is. So you do not really know what kind of problem you have got.

The second stage is the stage of inquiry, when you begin to realize that there is a problem, and you start doing the first investigations into how bad it is.

The third stage is awareness where basically through a process of osmosis, innovation dissemination basically, people become aware as they did with crew resource management in aviation that there is a standard, an implicit standard of practice, and the fourth stage is when that standard is in some sense adopted by some higher body. This now becomes the way people are expected to practice. It is not this is a high bar, but this is actually the accepted standard of practice.

DR. HOOTS: In following that up, it seems to me that if you at some point put the punitive too high, then you almost automatically eradicate potential for generative. So don't you have to be very careful?

Rules are rules, but if the implementation of the rules carry strong punitive components--

DR. WESTRUM: Sure.

DR. HOOTS: --then what is the incentive for the team to do anything but avoidance if all avoid punitive action rather than actually be creative, doing it for the right reasons?

DR. WESTRUM: Right. Well, I think you are quite right about generative systems not emphasizing punishment, but the truth is the bar is going to move. The standard of practice is going to go up as time goes on, and it is inevitable at some point that the standard is going to be legally enforceable. It may be enforceable in liability suits. There is a whole variety of different ways in which what was once an elective thing to do now becomes an accepted part of safe practice.

People are likely to be held accountable to that standard, whether or not it is part of legislation or whatever. So what is happening basically is that the world is changing.

DR. AuBUCHON: I would like to thank Dr. Westrum for slogging through the snow here today to join the other speakers in the first session to give us a really excellent overview on error management.

We will now have a 15-minute break, and we will reconvene to focus more directly on transfusion medicine.

Thank you.

[Recess taken from 4:17 p.m. to 4:37 p.m.]

DR. CAPLAN: Could we get our panel to reconvene?

[Pause.]

DR. CAPLAN: [Presiding.] Is Dr. Linden here?

The fastest way to actually get us reconvened is if you stand up there and look like you are ready to go. Then they will be ready.

If we could take our seats. The Chair is very pleased to be here. Thank you, Jim, for running the meeting. Sometimes I say I am very pleased to be here, but I am really pleased to be here. Crossing the Northeast at 10 miles an hour was not the most fun experience I have had in my life.

Anyway, we are up to Dr. Jeanne Linden from the New York State Department of Health who is going to talk about human error in transfusion medicine. So why don't we just start.

DR. LINDEN: Could I have the slide on, please? Or, maybe I can just advance it.

We have a mandatory reporting system for transfusion-related errors in New York, and I was asked to come and speak to you about that. I would like to start out by telling some individual case histories of some of the events that we have seen to sort of give you a feel for some of the types of events.

I chose some illustrative cases. One involved a 30-year-old healthy male who was in a motor vehicle accident and was not identified by name when he went to the hospital. In the surgical intensive care unit, he received a unit of Group A red cells, although it turned out he was actually Group O. He did suffer an acute transfusion reaction that was not recognized by the residents.

The nurse actually reported the signs and symptoms, which I guess actually were signed in this case, to a PGY-2 resident who in turn consulted with a PGY-5 resident, but they decided it was not a transfusion reaction, it was something else.

So they then transfused a second unit of Group A blood. When this was investigated in some depth, it turned out that this patient and another patient being seen at the same time were assigned sequential identifiers. One, the medical record number which actually was 611 and 612 with four other digits that were identical preceding that--I have abbreviated here for purposes of confidentiality since these are the actual numbers, but what this hospital did in identifying patients without a name when they came in is they assigned initials and they assigned them sequentially. So that, these two patients wound up with extremely similar identifiers that certainly we sitting here in this room recognize this was really sort of an accident waiting to happen. This is actually something that we see very commonly that the medical record numbers are assigned sequentially. So that, the patients wind up with very similar numbers, and then if they happen to have similar names, it can be very difficult for the humans to identify things.

What happened in this case is the blood bank tech is actually the first person who really made an error, and the blood bank tech actually issued the wrong unit. They issued a unit for the wrong patient, and they had a computer system in which the tech actually bypassed in this case. Then the nurses in the SICU did not properly identify the patient and did not realize the error.

So, in this case, at least six different people made errors, the blood bank tech, three nurses, and the two residents who did not recognize the reaction and did not intervene.

This patient actually wound up dying about a month later, although not in my opinion from the incompatible blood. It is, of course, difficult to tell the attorneys that, but previously there was mention of whether reporting is ever helpful in terms of the liability aspects. In this particular case, when I reviewed the case, I did render an opinion to the hospital that the death was not in my opinion due to the incompatible blood, and that was helpful to the hospital. There was actually a settlement.

This next case that I would like to present involves a unit of blood collected by a hospital which nationwide is, I believe, about 6 percent of the units. In New York, it is actually slightly higher. It is 10.4 percent.

In this case, the testing was not done by the hospital. It was done by an outside lab, and a blood center was involved in reporting the results. Because electronic reporting was not available, the results were transmitted by fax.

The upper section, Letter A, is what was received by the hospital initially, and I don't know whether people can see that, but if you look at what is in the box, it appears to say O-positive. If you look down at Letter B, this was a second fax set on a high-resolution setting. You can see that what was supposed to be transmitted was actually B-positive. In fact, if you look over at the first column on the unit numbers, which again are abbreviated for confidentiality purposes, you also see that the "8" appears to be a zero. So that this also showed that there were problems with numbers.

In this particular case, a B-positive unit was issued as an O unit, and not only did we find that there was a problem with a facsimile transmission of results, but the hospital is, of course, also supposed to do an ABO typing on the unit, do a second check. They collected it, but they were supposed to be doing a second check. They were doing it after the unit was already labeled, and there was no evidence that that second testing was ever done. So that is how the unit got released.

The only other point I wanted to make here is in reference to infectious disease results because we have seen HIV-positive units released because of misinterpretation of laboratory test results going from one facility to another, one by fax and two just because if you look at these columns, the NRs--if you have an "N" with all the "R's" mixed in there--if you have just an "R," it doesn't really stand out. If you have reactive mixed in with nonreactive, it doesn't really stand out.

The other thing we have seen is if there is a pending result and the result is left blank, which is, of course, what happens when there is initial reactive, those blanks may be overlooked by people looking at these results. So those are some of the other types of things that we have seen.

This case involved a 41-year-old otherwise healthy man who was undergoing a laminectomy, and following surgery, there was about 2,500 mls of blood loss.

The surgeon used a postoperative blood recovery device, as surgeons tend to like to do, and there was about 150 mls of sanguinous fluid collected along with air in this bag. It was a change of shift, and the person who really was familiar with how to use this device was going off shift and the staff coming in asked for sort of a mini in-service. The person who knew how to use it spent maybe a minute or two explaining things to the other staff, but they really had not been trained using this device, and they infused this blood and air under pressure and the patient promptly had a failed cardiac arrest due to a massive air embolus.

Certainly, in this case, there were multiple problems. One is certainly the system's flaw that a lot of these devices, particularly the manual devices, do allow air into the system. It does have to be manually evacuated, which is sort of an extra human intervention step that it is easy to overlook.

Here, of course, there is also the question of is it really beneficial to try to infuse 150 mls of sanguinous fluid that probably actually has very few red cells in it, and certainly, there has been a lot written in the transfusion medicine literature questioning whether postoperative salvage is even useful.

This was illustrative of several cases that we did observe. I know you got my paper on this. So I will not go through all of them, but this was a common problem. We were seeing one death a year due to air embolus associated with interoperative blood salvage. So we sent out a series of actually four different warnings, and we asked hospitals to designate a particular physician to be in charge of this area to take some responsibility and exert oversight. Knock on wood, we have not had any more of these since 1995, not that I am claiming credit for that. I am just mentioning that people have been successful in preventing them.

The last case actually has to do with one of the other hats that I wear overseeing the tissue banks that do business in the State of New York, but it is illustrative of the same type of thing that we see in transfusion medicine.

This was an embryology laboratory. There were two patients undergoing IVF on the same day, and the embryos from both of these patients were on the warming stages, like a hot plate, simultaneously, and there were actually four petri dishes. Patient A only had a few embryos, and those were all going to be used for the particular procedure, but Patient B had a lot of embryos. So there were good embryos to be used today. There were some to be frozen, and there were also some to be discarded that, when they were examined, were really not quite up to snuff according to the standards of the embryologist. These are not in writing anywhere.

So there were actually four petri dishes on this warming tray, and as you would expect, since I am telling this story, he grabbed the wrong ones, put them in the catheter, gave it to the physician, and the wrong embryos were implanted in this patient.

What is really shocking is he went back then to the petri dish again, the warming tray, and he realized that he had made a mistake. Rather than say anything to the physician, he just then took the correct embryos and implanted those as well. The physician did not question getting a second catheter because it is not uncommon in IVF procedures to have the embryos stay behind in the catheter and you check to see if they have actually been transferred or not. So it is not uncommon to get a second one. So the physician did not really question anything.

What actually wound up happening--you probably heard about this on the news--is that Patient A did have twins that were of different genetic parentage, and in fact, they happened to be of different races. In fact, the baby that was not genetically hers actually proved in fact to be the other couple's, and in fact, they gave it up voluntarily in terms of custody since the other patient had not become pregnant.

When we investigated this, we found that the facility actually did have an SOP that said you never handle embryos from two different patients at the same time, but this particular embryologist was not following those because he rented the facility for the day. He did not actually work there, and with embryology labs, this is not uncommon to have the shared facilities and have someone else come in. He apparently felt that he did not need to follow procedures like that because he had a Ph.D.

DR. CAPLAN: And what is your point?

[Laughter.]

DR. LINDEN: So, keeping these examples in mind as a background, I am going to tell you a little bit about the reporting system that we have in New York.

This was started back in the late '80s. Actually, the regulation itself went into effect in 1989, and we did this because we had some of these incidents and we realized that this was a real problem, something that we needed to look at.

So, in terms of the blood banks, we have about 275 facilities in New York or doing business in New York, and this is a mandatory system. The facilities are required to report to the State health department.

It is primarily a passive system, but when I say it is validated during an on-site survey, what I mean by that is that there is an active component. When we inspect these facilities every 2 years--this is a compliance survey--we look at their own internal error reporting files and the surveyors look to see if there is a letter of acknowledgement from me, and they also ask the staff what errors or accidents, incidents have they had. Usually, even if the higher-up people want to try to cover something up, there is always going to be some lower-level person who is--you know, that there really was a problem.

We also have the opportunity for complaints. We have an 800 number, and on extremely rare occasions, a staff will actually call us up to report something directly that they feel that their superiors have not adequately reported.

Our system is confidential. It is specifically excluded from the Freedom of Information law in our State, and this was accomplished through legislation that we specifically pursued for that purpose.

In most cases, the patient is not identified. There are occasional exceptions to that, including when I need to review the medical record. In most cases, staff also are not identified. Often, people will refer to initials or they will describe that the names are very similar, that sort of thing.

It is also a non-punitive system in that we do not ensue enforcement action based on these reports with extremely rare exceptions, and those are because we have obtained other sources of information and their special circumstances.

We also coordinate with the New York Patient Occurrence Reporting and Tracking System which is for any adverse outcome or unanticipated event in a patient in a hospital regardless. Transfusion events come to us, and that is other type of patient events. There is actually about 15,000 of those in New York. We coordinate with them and in fact are working with them on some group analysis issues.

Also relevant to some of the data that I am going to be presenting are the fact that we collect supplemental data. In New York State, it is also mandatory that facilities report their activities, and by that, I mean the number of units they collected, how many they transfused, how many were autologous, those sorts of things so that we have a denominator from which you can calculate frequencies, relative risks, that sort of thing.

The other point to make in terms of the facilities from whom we get these reports is that Federal facilities are exempt. So we do not include VA hospitals, but those are relatively few in number.

We also do not collect things such as how many transfusions are administered by different types of staff and so forth so that we would not be able to calculate frequency of how many of these events involve different types of staff, although certainly we see how staff coming up awfully frequently.

The scope of the events are basically any serious accident error in the collection or administration of blood that poses a significant risk to the patient or a donor. The things I am going to focus on today are the most significant, the actual events where blood was administered to the wrong patient or the wrong type of blood was administered to a patient, adverse outcomes. We also get disease transmissions, HIV, that sort of thing.

We also have reportable under our system if the wrong blood is issued by the blood bank, but the astute nurse or anesthesiologist on the floor realizes, "Gee, this blood is for John Jones and this is Joe Smith," and sends it back to the blood bank, that would be a near-miss that we would be picking up, but we do not pick up near-misses such as phlebotomy errors that the blood bank may detect because the patient's blood type today is different than it was last week, that sort of thing.

We also have events with a good outcome, but where there was potential for significant harm, such as failure to irradiate units that need to be irradiated and that sort of thing.

We also include transfusion-related acute lung injury, disease and those sorts of things, but I am not going to be discussing those today.

The format that we use is basically a narrative format. We do have a sample form that people can use. Not everybody chooses to use it. The blood centers that operate in our State have their own form that gives us the same information. It is basically narrative, and that is just fine. Most of the hospitals do use the sample form.

Basically, it is a form which was provided to the committee members. It has a checkoff box for the nature of the type of error and then a narrative for what happened, the underlying contributory factors that they may have identified through a cause analysis if they did it, and corrective action. This is basically a narrative form. In fact, a lot of people just say see attached and send us a narrative report.

What we get and what we want is basically a synthesis by the blood bank following their investigation. So they talk to the staff who were actually involved, find out what happened. They do their analysis, figure out what the underlying factors were, and they report it to us. There is usually follow-up reports.

We also go back to them, ask them for additional information. We help to guide them. If there are things that we think that they should be looking at, we ask them to get those things, and that is partly because we want to see it, but also because we want to make sure that they are looking at everything that they should be looking at and we do try to assist them with their corrective actions.

Then, based on these narrative reports, we have an internal form that we code to put into our database, and that has been provided to you as well.

Another point that I wanted to make is that just because someone died following receipt of an ABO-incompatible blood transfusion does not mean that that is why they died. In the first case that I gave, that was a perfect example of that.

These patients tend to be a sick group of people. A lot of them die from their underlying disease, and it is really critical that there be a careful medical review of fatalities to determine whether the death was due to the transfusion or not. Facilities may be able to do that themselves. There may need to be definitely some additional review on the part of the reporting agency, which in this case is my office and I do that. We often find that when things are initially reported as possible fatalities and in fact turns out not to be, then in fact it was actually their underlying disease, although sometimes it could be a slight contributory factor. It certainly does not help them any to get incompatible blood.

Some data. This is a subset of significant events that involve patients actually getting the wrong blood or the wrong type of blood over a 9-year period, and there were a total of 427 such events. When you look at the nearly 1 million red cells that were transfused per year--and this is only red cells. I am not discussing other components, blood derivatives and so forth, at the moment. There were about 1 in 37,000 units that turned out to be ABO-incompatible because of an error, and this is not patients. This is units.

If you look at the 1992 paper that was provided to you, that is actually a fairly similar figure.

I would also mention that it has been estimated that at least prior to testing that the risk of getting any transmissible disease through blood was 1 in 34,000, and people are a lot more worried about that. There is actually an equally high risk of getting the wrong blood.

There was about an equal number of observed units of ABO-compatible blood, and we did an adjustment when one considers the fact that there is a two-thirds chance that a random unit is going to be compatible with the recipient and one-third should be incompatible. We speculated there was an underreporting of ABO-compatible mistransfusions because it was not noticed that there was a problem, and in some cases, it may have been a misunderstanding and people didn't realize that it was reportable. So, if you do that adjustment, it turns out that an estimated 1 in 13,000 units go to the wrong patient are the wrong type.

We observed four reactions that were clearly fatal as a result of the blood. There were others that were more marginal, but those are the ones that I am counting. The 1-in-2,000,000 figure is a somewhat lower frequency than some other figures that you have seen with the Federal database, but it certainly is in the same range, and because the numbers are so small, certainly what we are talking about is something in the range of 10 to 20, or perhaps a little bit more than that, fatalities per year in the United States, if you use these figures.

Those figures, by the way, are also similar to what has been observed in the United Kingdom, the serious hazards, the transfusion report that came out recently.

In terms of the types of problems, this is basically the number-one problem: the nurse administers the unit to the wrong patient.

Over half of the errors that fit into this category occurred outside the blood bank. The number one was that the nurse just administered the blood to the wrong patient without checking the identification.

Phlebotomy errors at 12 percent were also definitely up there.

Blood bank errors were at 30 percent, which is certainly more than I would like to see, that involved both technical testing errors, clerical errors which are primarily transcription errors. Sometimes they tested the wrong sample or, perhaps after separation of the sample, it got mislabeled, that sort of thing, or sometimes they just pulled the wrong unit off the shelf like one of the examples I gave. I think it was the first example where they just issued the wrong unit.

In 15 percent of cases, we found that there were compound errors that occurred in multiple places. Primarily, this was that the blood bank issued the wrong unit, and the nurse on the floor could have detected it, but failed to do so and just administered it because they thought that is what they were expecting.

We also had 41 cases of the near-misses where the blood bank issued the wrong blood, but the nurse did detect it. As you can see from these descriptions, these are primarily what have been characterized as "slips," where the person did intend to do the right thing and just did not carry things out properly.

We do see some mistakes. We have seen a very small number of violations, rogue behavior people intentionally violating the rules. While these are very few in number, unfortunately they tend to have very adverse consequences and turn out to be a big problem for everybody involved.

DR. LINDEN: If one looks at the patients, the subsets who received ABO incompatible blood, about half of them had no adverse effect reported. 39 percent were noted to have had asymptomatic acute hemolytic transfusion reaction, and 7 percent a serologic reaction EODAT (ph) and so forth, 2 percent had a fatal hemolytic transfusion reaction, 4 percent you couldn't really tell, because they died anyway from something else.

I also wanted to point out the detologis (ph) blood is not free from air, that it can also go to the wrong patient just as easily. And we previously saw 1/16,000 risk based on a fairly small number of units. But this is particularly dangerous for two reasons. One is, if it is not their blood, of course, if it's not fully tested, there may be a risk of a transmissible disease. In fact, even tested and positive, even if it says biohazard on it, they're, they're still going to transfuse it perhaps without double checking as thoroughly as they should.

And also if there is a reaction, it may be ignored. In one case, we had somebody with flagrant signs and symptoms of acute hemolytic transfusion reaction. The physician specifically said, oh, it can't be a transfusion reaction. He's getting his own blood. So we'll look for something else, and we'll ignore it.

If you look at the ABO incompatible allogenic units and break it down by the blood type of the recipients and the units, there is a trend that the O recipients tended more to have an acute hemolytic transfusion reaction with 44 percent of O recipients receiving an A unit having a reaction, versus the smallest number being the A patients who received a B unit.

And I would mention that all four of the fatalities were O patients. Three of them received group A blood and one of them received group B blood. The minimum volume that was fatal was 30mls. And using a 2 by 4 type square and then a sixth weight test to proportions, these actually were not significantly different. The high--lowest P value was .09. But this is really because the numbers are too small to yield a statistical significance. And it certainly does appear that from a practical standpoint, there is a trend more towards the A patients having more of a reaction. And this is certainly worthy of further study.

When one looks at how these errors were discovered, about half of them were at the bed side, either as the result of a hemolytic transfusion reaction of 30 percent or 22 percent where after the blood was already administered, the, the nurse realized, oops, I forgot, you know, they checked after the fact and realized the mistake.

Only 5 percent were found as a result of supervisory review. And certainly, we could do better there. 19 percent were cases where the floor was completely oblivious, but the blood bank found the error subsequently because the blood type came back different, because of a phlebotomy error, or something else of that nature, of course, they could also detect a mixed field reaction, that sort of thing. And there were various other miscellaneous ways of finding these.

We also broke down the reports that we saw by the size of the transfusion service. And I apologize that this is so small. Basically, if one looks at these smallest facilities--and I determine size based on the number of red cell units that they transfuse per year, less than 2,000 per year, there was a, about a one third higher risk of at least reporting incidents versus the largest facilities, transfusing more than 5,000 units per year. And certainly, there could be multiple reasons for this. But certainly, a lot of the problems that we find, which I would classify as horror stories, tend more to happen in the smaller facilities that really are not very accustomed to transfusing blood very often.

When one looks at the, some of the underlying contributory factors that appear in some of these cases, there were some things that popped up fairly commonly. One is that the safeguards that were in place were bypassed, most frequently because the wristband was cut off in the operating room or for some other reason, to put in an IV line, and so forth, so that it was very difficult to do the comparison with the wristband.

Another thing that we see commonly is a problem if the people use a card to make pre-printed labels, and then they grab the wrong labels and put them on the wrong tubes. Very frequently, the same or similar names, either in a patient care center or in the operating room, as with the initial example that I gave, consecutive identifiers, again, like the example I gave. Many computer systems make these identifiers sequential. Telephone or verbal communications, also very prone to misinterpretation. Fax communications, as with the example that I gave, very subject to misinterpretation.

Menial procedures for release, more and more facilities now have computer systems, but sometimes they get bypassed. And certainly the smaller facilities that don't have a computer system are more prone to error because of the interaction of human beings. Also inadequate consideration given to the patient's input/output, because sometimes the patient will say, gee, doc, I'm an O and that unit hanging up there is an A. And on occasion that is ignored.

Some other latent systems failures that we have identified include a lack of delineation of responsibilities, knowing of who is responsible for what. So it turns out that nobody does it. Absence of proper standard operating procedures or proper training so that people know how to follow those. As in the example of the inter operative blood salvage device that people didn't know how to use properly.

Also if there is an event, insufficient training in recognizing acute hemolytic transfusion reaction so the intervention can occur, very common problem. Also in some cases, unapproved equipment being available for use. Things like microwaves that can damage cells, things like that.

So--actually, let me go back.

I just want to mention that over time, having done this for ten years now, we have seen some changes. Initially, we see as a corrective action that the employee was counseled, the employee was terminated on some occasions or, you know, we're going to do an in-service training on our SOPs, those types of things.

We have more recently seen less of a focus on the errant staff and much more of a focus on the types of systems problems. We've seen better root cause analyses presented and also definitely more and more of a trend toward more facilities using computer systems that, that automate and bypass the humans, at least to an extent, and other types of automation which, in my opinion, is definitely the, the solution to a lot of these human related problems.

So based on my personal experience doing this for the past ten years, what I think the reporting system should have is that the facility needs to collect and interpret primary information from the individual people who are involved. And if it's going to be centralized reporting, I think the centralized agencies, such as my office, what we want is the synthesis of that information from the facility. The individual facilities have very diverse systems. They know their system best. And they can really figure out what their problems are, you know, perhaps with a little nudge from us.

I also wanted to mention the analogies with the aviation reporting system. I think this is an extremely valuable model that we can learn from. I think it is not directly applicable to these types of systems. One of the problems that we have here is that there are huge numbers, over a million in the United States, of people involved in this process. And as I showed, only 30 percent of these errors were in the blood bank. The other 70 percent involved nurses, phlebotomists, also some other people who may spend a tiny fraction of their day involved with the transfusion system. They're mostly doing other things. They are not vested in the system. They're not particularly knowledgeable on the system. And their input is particularly valuable, but unlike the pilots whose reports directly to the system are very valuable, in my opinion, getting reports directly from those people without the additional synthesis by the blood bank is, is not particularly useful.

I agree, as has been said by others, that a confidential system is important. I don't think that needs to be linked, though, with the mandatory voluntary or coded versus narrative. I think if you want it confidential, you take measures to keep it confidential through whatever legal means you can, and you can also just tell facilities not to put patient identifiers and not to put staff identifiers in, if that's what you want to do. And that achieves confidentiality in a way as well.

Coded versus narrative, I was asked to comment on. As I said, we used a narrative system and we glean a lot of very valuable information from that. If you use a solely coded system, in my opinion, you are going to gather much less valuable information. It is obviously much more cost effective. Distilling out a narrative system is much more labor intensive. Certainly, you know, one can do a combination of both.

But with our system, even the three of us in my office who do coding, we sometimes have to talk about cases, because it isn't really obvious how to code it. And if the facilities are doing it, I can guarantee you, you're not going to get consistency, you know, between the different individuals. So it all depends on how far you want to go, how many resources you want to put into the detail level of information that you get.

I agree with others who said the system should be non-punitive. I agree that's important. Although, we haven't really seen a problem in that regard.

We have had good luck with the mandatory system. But our state happens to be very heavily regulated, people used to mandates. So I can't comment whether that would work in other facilities, other areas.

As I mentioned early on, it is very important that you have access to medical review for fatalities and adverse outcomes to determine whether they are linked or not. Because otherwise, if you assume that they are linked, and it's very easy to fall into that trap, you're going to have a gross over counting of fatalities and adverse outcomes. And if you do want to have any frequency data, you are going to have to collect denominator data on how many transfusions are being done, how many blood collections, and so forth. And that is something that a system, similar to ours, has to offer.

And so in conclusion, what we have observed in New York, is that the, the incidences have actually been fairly steady. And it's really hard to tell whether there's an improvement or not. Because as other speakers have mentioned, when you start doing a better job looking for errors, you're going to find more errors. So that the quality improvement efforts may reflect better error detection and that may be counterbalancing any preventive successes. So I really can't draw any conclusions from the fact that the rates have remained pretty constant.

And what we intend to do is basically plan to continue the effort. We are going to be making some additions to the form to add some additional coded areas, you know, for blood type, and some things like that. We are continuing to coordinate with the NYPORT system particularly in regard to the root cause analysis.

And I'd really just like to say that I'm very, very pleased to see the attention being drawn to this area now. It's something that I've been concerned about for some time. And I certainly look forward to working with colleagues to work on addressing this very important issue. Thank you.

CHAIRMAN CAPLAN: Well, thanks.

Let's do some questions and comments. John?

DR. PENNER: Just two quick questions.

Is there any sharing of information with the Joint Commission for Hospital Accreditation? And then secondly, I'm concerned about, as we're seeing more and more staff reductions in both nursing and laboratory people around the country, do you think that you're seeing some trend in the past few years of more errors or is it still staying stable as you thought?

DR. LINDEN: In terms of the Joint Commission, we do have an agreement with them presently that's a work sharing agreement. And the NYPORT system and ours specifically working together are definitely targeted to meet the requirements of essential event reporting system. And there are discussions underway that, that these reports would meet their needs and would be provided, information would be provided to them. That's not in place yet, but hopefully soon.

In terms of the staff issues, we definitely have seen a trend toward more general staff in the blood bank. And that may possibly be a reason why the blood bank errors are still as high as they are and certainly in my opinion, on the hospital floors as well. I am a big believer in specialists. And we are seeing a trend away from specialization.

And, you know, as I mentioned the smaller more community hospitals, the nurses who are just not accustomed to doing this due tend to be the ones who are more prone to the errors. But it's, it's really hard to draw conclusions based on the type of data that we have. It's more of a gut feeling on my part.

CHAIRMAN CAPLAN: Jim?

DR. AUBUCHON: Jeanne, I've always been impressed with the completeness of your, your system, and thank you for sharing those data. I just wanted to make sure I understand the volumes and the workload that that relates to. You get approximately 500 reports for about 2 million units of red cells transfused. Is that approximately the volume of error reports you're dealing with?

DR. LINDEN: No. We had nearly 500 of these serious types of events over a nine year period. And we transfused almost a million units a year. I think the total denominator was 8.1 million. So there were 4 fatalities out of 8 million, which was how we got the 1 in 2 million fatalities.

DR. AUBUCHON: I see. Okay.

Could you give me an estimate of approximately how many FTEs, effort expended in handling these reports, investigating them, codifying them?

DR. LINDEN: I would think probably about two thirds of an FTE, maybe a little bit less, if you don't count some of the analyses and really sort of more research type things that I do.

When you, when you have some of the really serious ones, they tend to take up a lot of staff time. But about two thirds of an FTE I would say would be about right. But keep in mind it would probably get an order of magnitude higher actual reports, but most of them are of less significance and don't require the same level of scrutiny that these reports do, particularly the reports from the blood center, you know, the blood center reports didn't even appear here.

DR. AUBUCHON: And the last clarification is that the Statue of Liberty is in the Garden State, isn't it, rather than New York State?

DR. LINDEN: Hum? That is true. The Statue of Liberty is in New Jersey.

CHAIRMAN CAPLAN: Larry, do you have anything?

MR. ALLEN: I basically wanted to ask what happens when a patient is aware that there has been an adverse reaction? What's your institutions policy in regarding some type of discussion with that patient or their family?

DR. LINDEN: Well, on the part of the State Health Department, we do not have any policy in that regard. That would be up to the individual institution. I mean, certainly we support, you know, patient notification, disclosure. But there's nothing really that--it's not a mandate.

MR. ALLEN: So there's nothing to tell you whether or not the patient's or the patient's family has been notified exactly what happened and to assure them that that won't happen again? There's no, nothing in place for that?

DR. LINDEN: That often is part of the report, but it is nothing that we can require. So if it's not there, there's nothing we can really do. Sometimes the patients do contact us directly.

MR. ALLEN: One other quick question. What's the cost, can you give me an estimate on the cost of what your department does, what added cost that are added onto the institution to do what you're doing, as far as the tracking?

DR. LINDEN: Well, as I mentioned in response to Dr. Aubuchon's question, it's probably about two thirds of an FTE. And this is basically incorporated into the laboratory and blood bank oversight program which then is passed along to the facilities in, in the way of fees. So I, I can't really say per facility.

But in the scope of the laboratory and blood bank oversight program that we have, it is a really tiny, tiny in terms of dollars, but important in my opinion.

CHAIRMAN CAPLAN: Let's do Fernando and then we'll do Mike.

DR. GUERRA: Jeanne, in the, in the instance where one of these adverse events leads to a fatality, when you have received information about that death before it has been reported on a death certificate, or how reliable is the information on the death certificate in terms of trying to identify a transfusion related casualty?

DR. LINDEN: I have found that the death certificate is not very useful. Our facilities are required to report within five days, but it is usually a preliminary report and we get additional information, more complete report later.

It in large part depends, or I guess it partly depends, anyway, on whether there's an autopsy, how thorough the investigation is by the hospital in terms of the death certificate. The one example that I gave, that was a medical examiner case. And the death certificate actually did say that, that it was due in part to the transfusion reaction, which is one reason the litigation, you know, turned out the way that it did.

But that was based solely on the fact that the hospital coded it that way. So what you put on your, you know, discharge summary can be important. And it is, as I say, not particularly--I mean, it needs to be considered, but it's not always necessarily accurate.

CHAIRMAN CAPLAN: Let's do Mike, Mary, then me, then we'll go on to another speaker.

DR. BUSCH: Jeanne, most of the data that you seem to show for blood banks mostly reflected transfusion service errors. Do you capture both blood center and transfusion service errors, are those both reportable? And can you comment on the distribution?

And then secondly, can you comment on the overlap between your reporting system and what's required in terms of reporting to FDA?

DR. LINDEN: Yes. Blood centers need to report as well. And numbers-wise, the reports from blood centers far exceed the hospital transfusion services. In part, there's more things that can go wrong. They also have better system to track and identify those.

I did not include them today. They tend to be not as significant. There have been some. There have been two HIV positive units that were released because of testing interpretation errors. But on the blood collection side, the errors tend to be a little bit less significant for the patient.

I'm sorry. What's the second part of your question?

DR. BUSCH: The overlap or redundancy?

DR. LINDEN: Oh, the overlap in FDA.

There definitely is an overlap. I mean, we have worked on occasion with FDA in terms of looking at specific things. But certainly it is an area where there's certainly opportunity for future collaboration additionally.

DR. CHAMBERLAND: Jeanne, thanks.

A couple of questions. Have you ever been able to undertake a formal assessment of the level of under reporting that might exist in your system or do you have any hunches, not a formal assessment?

DR. LINDEN: We have not undertaken a formal assessment. But based on the, if you want to call it, actor surveillance that our surveyors do during the laboratory survey, the number that the facility are aware of and intentionally are not reporting seems to be very small.

What I have no idea of is the incidence that the facility is not aware of either, and trying to capture some of the ABO-compatible ones is why we did that calculation. But if the facility does not recognize it, I don't know how we can possibly get at that. But certainly, I'm open to suggestions. I agree that that's important, something important to do.

DR. CHAMBERLAND: And then maybe there--I'm--wanted to know if it would be possible for you to expand a little bit about this collaboration with this wider system of adverse events reporting in New York State. Because one thing that I've thought about is that transfusion in the, let's say the hospital setting, the administration of transfusion is, is one of many activities. And I'm very interested in attempts to work towards more comprehensive or coordinated efforts at reporting that, you know, if we, if we go off in different tangents, sort of in parallel, which may not be the most efficient or effective way to go about adverse patient reporting in general. So I was very curious about, you use the word coordinate or collaborate, how that worked?

DR. LINDEN: Well, largely to date what it has been is that the facilities are told that if they report to my office, they do not have to report to the hospital--Bureau of Hospital Services that is who operates the NYPORT System. And prior to NYPORT, there was another system.

We do share information. And on some of the more significant ones, we have definitely done joint investigations. And it's really, though, information sharing is what it has been limited to, to date. As I say, in terms of the root cause analysis, since the NYPORT System is going to be structuring that in a certain fashion geared toward meeting the joint commission's expectations, we intend to work with them on that as well so that we will have some consistency. But that's something we're really working on right now.

CHAIRMAN CAPLAN: Just one last question for you. I was curious.

Do you know, would you know, or would you guess in the adverse events and deaths, the serious adverse events following up on something Larry wanted to know, how many do you think are situations where the patient or the patient's family didn't know that there was a blood related adverse event? In other words, you knew, it was reported, but does the situation seem to be one in which the families or the patient know anyway, can't tell, wouldn't know?

DR. LINDEN: I really don't know. I don't have data on that. Certainly, many times we are told that the patient or the patient's family has been informed.

CHAIRMAN CAPLAN: Well, just a quick follow-up then. If a legislator came to you tomorrow of New York State and said, you know, I think it's time to have mandatory disclosure of all the adverse events off the data at the end of the day, the confirmed ones, you'd say what?

DR. LINDEN: You mean for the hospitals to have to disclose?

CHAIRMAN CAPLAN: Mm-hum.

DR. LINDEN: Depending on how they phrase it, my experience with legislation has been similar to what one of the previous speakers said. You don't want them to come up with the phraseology. But certainly, I am for disclosure. And I think it, it could be a good concept if applied properly.

CHAIRMAN CAPLAN: Okay. Thanks.

We've got next in our line of presentation, Deborah Greene couldn't make it here, but I think Sue Reardon did. And she is going to talk to us a little bit about management of error in transfusion services.

MS. REARDON: Okay. Debbie and I were asked to come here and talk about what transfusion services are doing to managing the errors--to manage the errors. And I will try to more rapidly go through the slides than we had previously anticipated in the interest of time, and also owing to the fact that Debbie had at least half of the slides also. So I may not be as in depth on some of the, the items that are presented.

But what we wanted to do was to start out by showing what the, the following things: The objectives of error management, error identification and documentation and how that's accomplished, the tracking and trending of the data that's gathered, and also take a look at the corrective action and efficacy of follow-up that's required when errors are identified.

So what are the objectives of error management? Well, we want to satisfy regulatory requirements. And I'll show you how many of those there are that we are under the constraints of. We want to minimize human error, which you've heard a lot of talk about here today. And we also want to affect process improvement.

So who are our regulators? Well, there's the American Association of Blood Banks. There's FAHCT, which is a--it monitors or regulates the stem cell program. Of course there's the FDA and Joint Commission on Health Care and HCFA, the College of American Pathologists, and there's also new ISO-9000 standards. I don't want to belabor the point and go through each of these in detail, but I just want to show you a trend that runs through all of them.

The AABB standards state in its section on incidents, errors, and accidents, the blood bank should have a process to ensure the capture, assessment, investigation, and monitoring of events that deviate from accepted policies and procedures.

FAHCT also have quality management requires much in the same way and says that corrective action shall be documented and reviewed by the medical director.

FDA, again, reporting errors and accidents and written SOPs for managing complaints. When we talk about FDA reportable error, it's a manufacturing error performed by a facility that affects the safety, purity, or potency of a product made available for distribution. And HCFA talks also about a process to perform root cause analysis, monitor for effectiveness.

JCAHO says that an organization has to have a planned systemic approach to performance measurement, assessment and improvement. And JCAHO has a sentinel event policy. And they talk about unexpected occurrences involving death, serious physical injury, serious psychological injury, or the risk thereof. And they call it sentinel because they say it sends a signal. And this includes errors of commission or omission that were mentioned earlier.

Now in JCAHO's sentinel event policy, some events are reportable on a voluntary basis. And those are hemolytic transfusion reactions involving administration of blood or blood products having major blood group incompatible, that's just been talked about. Root cause analysis should be required. And efforts to gain knowledge and reduce the frequency are what's, what the focus of this is.

CAP also talks about quality improvement. And then ISO, we've got international organization for standardization. And they basically talk about generic standards for quality management and quality assurance. And it's often a customer driven initiative. Again focuses on control of non-conforming products, corrective and preventive action to identify problems, determine why they occurred, apply and verify corrective action, and then the use of statistical techniques, which I'll touch on too, to determine whether or not they're needed and to apply them to the processes. So those are the, the agencies that dictate to us that we, we manage our errors.

Now, when we talk about minimizing human errors, we talk about minimizing versus eliminating. And I think you've all heard about zero tolerance. Of course the blood industry is familiar with that buzz word. But the question is, is it reasonable or feasible?

When you've got human processes of course involved, there is a tendency toward error. And I think if you look at the word zero tolerance, what it basically says is we will not tolerate or we have an implied obligation to investigate the errors and attempt to prevent future occurrences, not that errors do not occur.

Now the human element is very important. And, of course, opportunities for human intervention in a transfusion process is, is many. And we talk about patient identification in specimen procurement. And has previously been discussed, that's a huge area for possible errors in patient identification. Then you've got your manual testing and test interpretation in the transfusion service, also blood component selection, the manipulation of those components, the labeling process. Blood administration, a huge area for human intervention, and also the recognition of adverse events which, as Dr. Linden referred to, did not happen on one of the case studies that she presented.

Certainly, automation and electronic mechanisms are available in the industry to perform some of these. But what's important to remember is if these electronic mechanisms are used, they need to be adequately validated. Just because it's electronic doesn't mean it necessarily operates as we expect it to. And certainly, if it's bypassed, that can lead to error.

I next want to speak to individual performance versus process problems. And I think in the--blood transfusions basically have made evolution in the way that errors are managed over, over probably at least the last ten years. And basically what we had in the past was the blame and train approach. And that is basically you had an individual that made an error and what you need to do is talk to that person, find out what happened, and either retrain or counsel the individual. And a failure to correct their performance, another error, became a job performance issue. Certainly, those kind of errors do occur. This is not to say that they don't.

But the majority of errors that occur may be attributable to process weaknesses. And what may point to process weaknesses would be when you have multiple events and/or multiple offenders either making the same errors or certain events that are prone to error.

Another area that is inherent in showing process weakness is when a, when a standard operating procedure changes. And we all know that there is--if the proper process isn't followed for making sure that individuals are familiar with and trained on a procedure change, there can be some problems there, especially if the procedure hasn't been validated.

And point out too that constant vigilance is required. Things are constantly changing in our environment. And as I've found out, processes do mutate. The blood administration policy that you put into place three years ago from the transfusion service that you had a hand in helping develop with nursing and you reviewed three years ago, if you go out on your intra net and look on the web site for nursing procedures, you may very well find that there has been some slight changes in that, that policy that you either didn't expect or didn't approve of through the transfusion service. And so you want to make sure that you've addressed the fact that this does occur and to audit and to make sure what actually is in those processes or policies that you think you have in place out there.

Blood transfusion errors, the JCAHO Sentinel Event Alert from August of '99 have an interesting--they talked about the blood transfusion process itself has factors that are certainly recognized to increase the risk of an adverse outcome. What are those factors?

Well, the first one is variable input. Obviously, there's different blood types involved. And because of that we have different types of blood that we can give, different components, and hence opportunity for error. It's a complex, highly complex system. And you've got everything from specimen procurement, as we talked about, to the internal blood banking procedures involved and the administration and monitoring of that transfusion, including utilization review practices and procedures to make sure that, that the proper components are given.

It may be inconsistent. There's no standardization across hospitals. Or if there is some standardization, it may be even within an institution that certain wards treat things differently. I know, you know, a neonatal service may operate differently than in the OR, may operate different than in a standard medical floor.

There's very tight coupling in the processes that happen. And so it may be difficult to intervene once the proverbial ball has gotten rolling and a transfusion event is occurring, and also, tight time constraints. And studies have always shown that in areas where you have an emergent situation, such as the ER, ICUs or OR, these are the areas that are more prone to the administration errors. There may be some shortcuts in procedures, some bypassing of some of the, the parts of the process. And misidentification errors can certainly occur. And, of course, we also talked about at length that human intervention is involved.

So to effect process improvement, then, what we want to do is effect system efficiencies and that, of course, is saving time and money. We have a focus then on patient safety. We want to decrease those negative outcomes. Be that treatment delays. There's a delay in transfusion. The blood didn't get to the area it was suppose to get and, and why? Ineffective treatments, maybe the wrong component was given. Maybe the transfusion service wasn't notified that it was a recent bone marrow transplant patient and there needed to be some specialized components given. And, of course, these can all, these types of activities can lead to increased patient stays. Customer satisfaction is also important in process improvement. And remembering that our customer is not only the physician but also the patient. And that we need to keep them in mind.

So in the circle of error management, we want to capture information. We want to have reporting. We want to classify the events that occur. We want to trend. Analyze the data we have, correct any deficiencies that we have, and then ensure the efficacy of that correction.

When we're capturing the information, each facility needs to decide, should it be reported, should it be corrected and not reported, how much information gathering is too much.

In reporting the information you want to know who, what, when, where, why, what kind of follow-up was taken. And obviously you want to have sign-off or medical directory review, quality unit review of everything that is performed.

When we talk about classifying these events or errors, it's necessary for trend analysis. It can point to priorities, things that you need to focus on as opposed to other errors or variances that may not be a priority. And you want to use the best mechanism for your particular facility.

Some examples of some classification methods. There's many different ways that a transfusion service can classify their errors occurrences. One is by operating system. Another is by analytic phase. There's severity classifications, and there's many others. I just want to run quickly through a few of these to give an idea of what some transfusion services do use.

Operating system involved, transfusion services are set up in, much in the same way. We have a quality management unit. There's the preparation of blood components, blood component labeling, storage transportation and expiration, the compatibility testing, and the selection of the components, blood administration, and then the records.

So if we classified an event by compatibility testing and component selection, maybe underneath there, one of the, the ways to classify would be this is a mislabeled, unlabeled, incorrect specimen. Another one could be incorrect test interpretation. Issue and transfusion of blood, same thing. You've got a number of different areas that an error occurrence can be classified under.

By analytic phase, pre-analytic, that would be blood specimen procurement problem, maybe a product ordering problem, certainly blood collection problem, analytic testing delays or testing errors within the blood bank, and post analytic could be a reporting problem or an administration event.

Severity, the CAP error classification system classifies severity, Class A being fatal or adverse outcome and these were the events that were the most serious. Class B being just a potential for adverse outcome, though equally important to investigate. And then Class C would be minor errors, clerical errors, things that never did go outside the transfusion service.

And tracking and trending the data then, it can either be based on the number of events. So what you may have is a standard. Where if we have a certain percentage of errors that occur, you would investigate. And anything under that, that percent would not be. Or based on the severity event, every major event is investigated. Maybe some of the minor events may not be investigated unless they reach a certain threshold.

When we look at data analysis, there's your data-driven, or pure numbers types of ways to analyze the data, and then there's subjective or cause driven ways to analyze data. A run chart is a plot of the measurement against time. And I'm, I'm going to basically run through a few of these just to give you an idea of the types of tools that can be used to analyze the data. I'm by no means an expert on any of these, these processes.

But this is an example of a run chart. And this was blood samples that were collected and whether or not they were rejected with a threshold of 1 percent. And you can obviously see with the run chart that there was a variation in the data and the months collected.

A control chart also analyzes the variation in data. And it monitors the trends, the runs, the shifts, and the cycles. This is an example of a control chart.

Histogram, it's a bar graph showing the amount of variation in a process. And there is three different kinds of histograms, normal, skewed, and bimodal. And here's an example of the three different kinds. And, again, I'm not going to go into these in any detail.

Parieto diagrams can also be used, and that's your basic 80/20 rule, where you have a significant few versus a trivial many when you're looking at causes. And so you want to look to identify the most significant aspects of a problem. And so that you want to focus on the biggest problems. And those can be used to evaluate improvement of before and after.

This was an analysis of unlabeled or mislabeled requisitions. And as can be seen by the data, the RN collected were the highest or the, the significant many, I guess, meeting the 80/20 rule.

Check sheets is something most people are very familiar with. And this is just a logical point to begin to identify a problem, basic data gathering. And based purely on your observations. And you can watch data grow and a picture develop. So an example of a check sheet would be rejected blood bank samples, how many by week.

Cause and effect diagram. This your basic Fishbone or Ishikawa diagram. And it's a graphic display of all the aspects of a problem. And there's two different ways to look at these. One is by using the 6Ms, man, method materials. Another one is to look at the steps and the process. And I prefer the one with the steps and the process. If you have something, if you have your effect is low platelet yields and you want to look at possible causes. You want to look with the first step being the phlebotomy technique. Maybe it's for phlebotomy.

If you look at the, the last step in the process, you've got inadequate platelet storage as being the possible cause. So what you have is a step-wise fashion and look at each of those parts of the process.

Process mapping is another tool. And what this provides is a picture of the process and includes all the steps. And it helps to identify or simplify or streamline the process. And there's common symbols that are used. And it can be a very, very good training tool.

This is an example of a process map. And just to point out to you, this is only part of a process. It does not--it starts out kind of in the middle of the process, and there's two endpoints. But that, that is what a process map does look like and some of the common symbols that are utilized in process mapping.

A story board can also be used, and it's the display of a story. What your project description is in looking at to analyze what has happened during an occurrence and what kind of remediation is going to be affected, and then you've got your continuous cycle back-up to whether or not you, you've fixed the process.

And you can utilize data analysis tools. This is an example of a quality improvement story board that basically has used different types of tools in different areas, project organization and definition. There is an analysis of the problem, etcetera, to effect continuous improvement.

And, again, it's important to choose the right tools. Some tools are useful for problem identification. Other ones are used for problem analysis. And there are some tools that can be used for either.

So in determining corrective action, what needs to be determined is, again, should all errors be analyzed or is it only going to be the trended errors or the serious errors that are analyzed. Again, a transfusion service needs to decide, depending on their size and scope of the errors that they may see--a very large institution may see a large number of errors. And it's important to focus on the ones that the most impact can be afforded.

A smaller transfusion service may not have a lot of errors reported, and they may want to follow up on all of them. Again, that's, that's up to the institution to decide.

And sometimes further data collection can actually be a corrective action. Maybe events have occurred and it's unclear whether or not it is, it is something that occurs more often than you may realize. So you may want to go ahead and audit for a while and see whether or not that is an error or trend that needs to be analyzed.

When we talk about investigating errors then, there is symptoms versus root cause. And with that analysis we may only identify the tip of the iceberg. Again, the blame and retrain approach should be abandoned in most cases. Certainly, that has proven not to be a very effective approach for resolving process issues.

And we want to dig down to the root of the problem.

Causal tree analysis is another technique that can be used. And I'm not going to go into it at all, because in a, in a later talk I think it's going to be discussed in detail where it's been utilized. And this is an example of, of one.

Brainstorming is also a tool that can be used. And this can be very important when you get a group of people together and they try to figure out why an event has occurred. And it's important to let each person take a turn and to be very quick about the ideas that are coming out to try to let other people kind of hitch hike on another's ideas and record everything and see if you can come up with some, some root cause.

In determining efficacy, there's periodic reassessment and measurement of processes that's required. We have found that we do, we do periodic audits on different systems in the blood bank. And if we do not revisit them on a regular basis, things change. Institutions, floors changes, patient types change and, again, I talked about SOPs changing outside of our purview. So we want to make sure that the process that we think is happening out there is still the same.

This requires continued observation, continuing auditing of important parts of the process. And in that case, the end of the loop leads back to the beginning of the loop. If you discover a problem again, you need to go back and analyze it.

So a recap of the important points. Know the regulations and the accreditation standards for a transfusion service. Blame the process not the person. Concentrate on proving processes. Design a program that addresses each step. Analyze the data using a variety of tools. Seek the root cause, not the symptom. And check periodically to ensure that it works.

Closing thought, if all else fails, immortality can always be assured by spectacular error.

DR. NIGHTINGALE: Thank you very very much. That's very good.

I would point out to the advisory committee members that most, although not the last three, of Sue's slides are in your book are as Ms. O'Callaghan's. I hope we have all of Ms. O'Callaghan's. Ms. Reardon and Ms. Greene were kind enough to fax the slides at a time when they thought they might not be here. I've got almost all your slides in the book. And for the committee members, we'll get the last set to you.

Are there any questions for Ms. Reardon?

Dr. AuBuchon?

DR. AUBUCHON: Just a comment. Thank you very much for a very thorough presentation.

To point out to the non-blood banking members of the committee, this kind of approach to error detection and analysis, although not, probably not as complete as you'd have played it out, but it is intended to be present in every member of the American Association of Blood Banks, according to the current approaches to membership and inspection and accreditation of that organization. Not all of them are, are able to put together such a complete program as you have seen here laid out with all of the bells and whistles. But the concept is one that is expected to be present in all of those 2,000 members. Unfortunately, not every hospital transfusion service is a member of the American Association of Blood Banks.

DR. NIGHTINGALE: Are there any other questions for Ms. Reardon?

[No response.]

DR. NIGHTINGALE: Thank you very very much.

As Ms. O'Callaghan makes her way to the podium, it is getting toward 6:00. There is a supply of caffeine to my right. And in view of the extraordinary circumstances, if there is either a member of the audience as well as a member of the committee that would like to avail themselves of the resource, please do so. The material is very important, and we want to make sure that everybody gets it.gets it.

MS. O'CALLAGHAN: I guess it's good evening now. It should have been good afternoon.

I'm going to discuss the FDA's mandatory reporting requirements for blood establishments and transfusion services. Specifically, there's two requirements for reporting. That is, the error and accident reporting requirement and fatality reporting requirement.

We're going to start with the error and accident reporting. Under 21CFR600.14 is the regulation for reporting of errors and accidents. And it states receiver shall be notified promptly of errors or accidents in the manufacture of products that may affect the safety, purity, or potency of any product. This regulation applies to all licensed biological products, to include blood establishments. It does not apply to unlicensed blood banks or transfusion services.

In March of '91 we issued a memorandum to the blood industry that reminded the licensed establishments of the reporting requirement. It also requested the unlicensed registered blood establishments and transfusion services to voluntarily report. And I'll show you some numbers in a little bit. And it also reminded the blood collection and transfusion facilities of a requirement in 21CFR part 606 which involved the documentation and investigation of all errors and accidents.

Now, the regulation requires the reporting of errors and accidents in manufacturing. So I wanted to bring to your attention the definition that we have in the CFR for manufacture. And this means the collection, preparation, processing or compatibility testing or other procedures of any blood product that meets the definition of a drug, and including manipulation, sampling, testing, or control procedures applied to the final product.

Okay. It does not include the administration of a blood component.

Now, in September of '97 we issued a proposed rule to amend the current rule. As I said, the current regulation only applies to licensed establishments. We wanted to expand our reporting requirement to the unlicensed facilities and the transfusion services. We also had a couple of audits done by the Office of Inspector General and GAO that made the same recommendation, that we should expand our reporting requirement to include all facilities that we regulate.

So this proposal applies to all establishments, as defined in 607.3(c), which include, but is not limited to, the donor centers, blood banks, and transfusion services. It also specified a reporting time frame of as soon as possible, but not to exceed 45 days from the date the incident was discovered. It also required a report when the unit was made available for distribution.

We also thought that in addition to this proposed regulation for reporting, we needed to include some definition. One of the definitions to be included was the definition of error and accident. And it--and we defined error and accident to be a deviation from current good manufacturing practices, applicable standards, or deviations from established specifications, or an unexpected or unforeseen event that may affect the safety, purity, or potency of a product, or otherwise cause a product to be in violation of the FD&C Act or the PHS Act.

We also define made available for distribution to be a biological product that has been determined to meet all release criteria and to be suitable for distribution whether or not actual distribution has occurred. Now, this means that a unit is on the shelf, ready to go, and there's no other final checks that would prevent its release or detect an error or accident.

We received--the comment period was open for about 90 days on this proposed rule. We received approximately 100 comments that were of a variety. And I'll go through some of those quickly.

A lot of commenters said that it was unclear as to the events to be reported by transfusion services. Some commenters thought that there should be a tiered approach, according to severity, similar to like CAP's classification of severity. There should also be part of the establishment's Q/A program. And there is no real need to report to FDA. And we agreed that it should be part of their Q/A system. It should have already have been in place whether they are required to report now or not.

Some of the comments we received regarding the definitions was to change made available to distributed. There's some general confusion over the definition of error and accident. And people thought we needed clearer examples of reportable errors and accidents.

It was also unclear how the data would be used. We specified that we do trending, but it still wasn't clear to, to everybody how we were going to use all of this information.

The format was unclear. And the--people thought there was an increased burden on the establishments. And there was a lot of sympathetic people thought that this was going to be a burden on us as well. We appreciated that.

But we are in the process of finalizing this rule, and we have taken into account a lot of these comments. And in addition to the final rule, we will be issuing guidance that will be more specific to the different types of establishments, giving more examples of what's reportable and what's not. We've looked at ways to try to identify the significant events that we really want to see, thinking about the made available for distribution as opposed to distributed, you know, and where the line is drawn at where are we really going to protect the public health.

Now, in--since 1990, we've been trending this information on errors and accidents. We've had it into a data base since then. And the numbers have significantly increased. In 1990 I think we only had about 1,500 reports. Now we're looking at 15,000 a year. And, in fact, I just did the first quarter summary, and it looks like we could probably get close to 25,000 for this year.

The types of establishments that are reporting, the licensed blood banks and the plasma centers. The plasma centers are licensed so they are required to report. So they do make up--those two types of facilities make up the bulk of the reporting establishment.

The unlicensed blood banks and transfusion services, as I mentioned earlier, we issued the memo to request that they voluntarily report. You can see how well that's worked. This has been consistent, 127 for unlicensed blood banks. I think the highest we had was about 160 in a year. And transfusion services, I think this is probably the highest we've seen.

When you look at the number of unlicensed blood banks as being about 2,800. And this only represents about 45 establishments. And the transfusion services, there's about 6,000. This only represents about 15. And so then, like I said, the memo was issued in '91. These numbers have not significantly changed. There is an example of voluntary reporting.

The next couple of slides, I'm going to go through some of the examples of reports that we've received, specifically for FY '99, but they've been consistent with the distribution of reports have been consistent over the last couple of years. Most of the reports involve donor suitability issues. And I'll go through examples of these in subsequent slides. Followed by storage and distribution, labeling, product testing, component prep, collection, and then a miscellaneous group. Of the 15,000 and some reports, only about 4 and-a-half reports were considered to be not reportable.

Under donor suitability, we subdivided this into prosternation information, donor screening and donor deferral. The prosternation information reports have represented the largest percentage of reports since 1992. And this, this is information that's provided either by a donor or a third party subsequent to a donation that if that information had been known at the time of donation would have deferred the donor. The most common, commonly reported type of information is that the donor received a tattoo, ear or body piercing, accidentally ill, sick or transfusion. In fact, most of these are tattoo and ear or body piercing, where they forget that they had this stuff, donor traveled to a malarial endemic area, donor had a history of cancer, and the donor became ill after donation, not related to hepatitis or AIDS, things like mono or chicken pox, that, those type of illnesses. And this is only a subset under each category of what we've seen.

Under donor screening, donor history, if the donor gave a history which warranted deferral, was not deferred, most of these involve donors who travel to malarial endemic areas where the SOP was misinterpreted and the donor was inappropriately accepted, donor record was incomplete or incorrect for the donor temperature or hemoglobin or hematocrit, incorrect ID used in checking the deferral list and the donor was previously deferred, or the donor's referral list was not checked and the donor was previously deferred.

Under donor deferral, there's only 1 percent of the errors in this category. And this includes donor missing or incorrectly identified on the deferral list or the donor deleted from the deferral list inappropriately or re-entered inappropriately where the re-entry algorithm wasn't followed.

Nine percent of the errors and accidents were in the area of storage and distribution. This included clotted or hemolyzed units or segments being available. Failure to quarantine a unit due to several reasons, one being incorrect or incomplete or positive testing, second, testing not performed or not documented, and third, failure to quarantine a unit due to unsuitable medical history. Also under storage and distribution, product shipped or stored at the incorrect temperature.

Other errors and accidents that would be considered to be reportable under storage and distribution would include an outdated product inappropriately made available or an analogous unit, not meeting homologous criteria inappropriately made available. Also if units requiring irradiation was not irradiated or a unit issued to the wrong patient as an error in the blood bank if it--if the nursing staff transfused the wrong unit, then it would not be reportable under the error and accident system.

Improper product selected for the patient or improper ABO or Rh type selected from the patient, again, these are blood banking errors.

Labeling is 6.2 percent of the reports, this includes additional information on autologous units missing or incorrect. This is the patient's name or social security number, date of birth, product labeled with an extended expiration date, product labeled with an incorrect ABO or Rh or product type was incorrect.

Other labeling errors and accidents include missing or incorrect label or tag for the designated unit number, antigen antibody, or a CMV-positive unit labeled as negative, an irradatiated--or a unit that's not irradiated that's labeled as being radiated, an uncross-matched unit labeled as cross-matched, or the recipient number or name is incorrect or missing.

Product testing is 4.5 percent. And this is subdivided into routine testing and bio testing. Routine testing includes incorrectly tested for Rh, ABO, antigen and antibody. It also includes incorrectly tested for compatibility or an incorrect sample was tested or a sample was misidentified, and that sample was used for a cross-match, was mislabeled or collected from the incorrect patient.

Bio testing includes all of the viral markers and also syphilis and ALT. This just gives you an idea of the most prevalent viral marker test that we've seen errors and accidents in. And that's the anti-core test followed by surface antigen, anti-HIV or HIV antigen testing and incorrectly tested for anti-HCV.

Okay. Component prep representing 1.2 percent of the, of the errors and accidents. And this includes incorrect or inappropriate preparation of products, such as the SOP reduction is not followed or the platelet count or platelet yield was unacceptable, or there was an extended collection time and products were manufactured, sterility compromised, bacterial contamination, air contamination, irradiation not performed or performed inappropriately. Other, other errors and accidents under component prep would include procedures not followed for pooling platelets for cryo or for separating whole blood.

Under collection, .9 percent of the errors and accidents are in this category. Most of these were bacterial contamination followed by air contamination, collection time extended or not documented, and the collection status was not documented.

Now, we felt that it was important to capture these types of events, these errors and accidents because we feel that if, if it's required to be reported to FDA, establishments are going to take a closer look at those types of incidents. And they're going to pay a little more attention to it.

Also, we provide the quarterly and annually summary reports of our trending that we do to the industry on our website. And it's, it's a term that they can use to identify not only their own, where they fit in the system in the whole system, but to look at what other people are doing and determine whether or not they need to make some changes or adjustments to make sure that they're not going to be making the same mistakes, the errors and accidents that other people are making.

So we think it is very beneficial to have this type of information. We know and we're working on stream lining and trying to make sure that we get all of the significant things that we really need. And we, again, we also want to expand this reporting requirement to include all of the facilities that we, that we regulate.

Now I'm going to talk a little bit about fatality reporting. The fatality reporting requirement is at 21CFR 606.170(b), under adverse reaction files. And it states that when a complication of a blood collection or transfusion is confirmed to be fatal, CBER shall be notified by telephone or telegraph--you can see how old this one is--as soon as possible, or a written report of the investigation shall be submitted to CBER within seven days after the fatality by the collection facility, in the event of a donor reaction, or by the facility that performed the compatibility testing in the event of a transfusion reaction.

I'm going to break this down into, into smaller parts here so that you can understand what it is that's required to be reported. A complication of a transfusion or collection is identified as a death not thought to be due to the patient's primary disease condition. And it's apparently related to the blood transfusion or, and/or to the collection. Which means that there's some evidence that there's a relationship between transfusion and the fatality or the transfusion or collection cannot be ruled out.

Where there is--the patient didn't die, the underlying cause, he had a transfusion, they're not quite sure. We expect those type of things to be reported at least initially.

Now, who is responsible for reporting? If it's a donor death, the blood collection facility is responsible for reporting. If it's a transfusion recipient, then the facility that performed the compatibility testing is required to report. And this regulation applies to registered and unregistered facilities. So it does encompass the transfusion services and the unlicensed blood banks.

The initial report is, is to be made as soon as possible. It can be made by either telephone or fax or e-mail. And a written report is suppose to be followed by the initial report within seven days after the fatality. And this document is the initial report, and it may provide more specific details to the initial report.

There's a lot of cases that we get where, you know, initially within the first 24 hours they don't know if the fatality was a result of the transfusion or if it was a contributing factor. So we get that reported initially. But when they do the investigation, it's determined, as in some of Jeanne's cases, that, that the fatality didn't result from a complication with the transfusion.

The number of fatality reports that we've received since 1996, we had an increase in '98. And we're not quite sure exactly why, but then they seem to have been coming down in '99 a little bit.

The types of transfusion related causes that have been reported for '99 have been hemolytic transfusion reactions, such as the ABO incompatibility, bacterial contamination, transfusion related, acute lung injury, or trialing, graph versus host disease, transfusion transmittal disease, anaphylaxis hypovolemia and there were three donor fatalities in '99.

I wanted to point out the hemolytic transfusion reactions, because I think in Dr. Linden's presentation, she kind of emphasized this as well, in 96 you can see that there were more floor errors that resulted in a hemolytic transfusion. Meaning that the floor errors capture the event that nursing staff or somebody outside of the blood bank, the OR, the ICU transfused the wrong patient with the, with the unit, or transferred the wrong unit to a patient.

As we go down the next three years, they're pretty much equal. So we've seen just as many reports--fatality reports as a result of blood bank error as we have for nursing staff or outside the blood bank errors.

Now, what's not reportable to FDA is errors and accidents in manufacturing where there are no products available for distribution, meaning that the product was destroyed during manufacture so it never got to the point of being put on the shelf, unavailable, or the error and accident was detected prior to final release. It had already been put on the shelf, but there was a final release procedure that prevented that actual distribution of that product.

It's also not reportable if the safety, purity, or potency of the product is not affected. For example, a unit labeled with a shortened expiration date, the product quality is not affected by that provided the unit was transfused within, within the appropriate expiration.

Discrepancy between shipment and shipping document, where there is like 10 units of blood cells in the box, but only 9 are listed on the shipping document. Donor donated within 56 days of the last donation, and the donor didn't meet the criteria for age, blood pressure, or pulse. These are donor safety issues and don't necessarily affect the, the safety of the product.

Also what's not reportable to FDA under the fatality reporting is if there is an error and accident in administration. Meaning that there was a recipient reaction that was not caused by an error and accident in manufacturing. The unit was transfused to the incorrect patient, again, outside of the blood bank, incorrect unit or product transfused, unit not transfused within the appropriate time frame, unit not transfused at the appropriate temperature, or the unit was transfused but was not held at appropriate temperature after leaving the blood bank. All of these are administration errors that, unless the patient died, it would not be reportable under fatalities, and because it's not an error and accident under manufacturing, it would not be reported under the error and accident reporting.

I've identified the, the--some situations that would not be considered reportable. But I did wanted to make the point that FDA doesn't ignore those. When we do our inspections, our routine inspections, we do look at the, at all errors and accidents, all the adverse reactions, to make sure that appropriate corrective action is applied, appropriate investigation is done, to make sure that they reported what they're suppose to report. So on those requirements and the GMPs for our establishments to document and investigate, apply corrective actions, for all of these whether it's reportable or not. So, so it's not like we're, we're ignoring things that are not reportable. We still look at them, just not at a, at a--at the level of the reportable incidents.

With the fatalities, they are followed up almost immediately. When we get a report of a fatality, either a FDA investigator will go in to an establishment to assure that the investigation was done appropriately and corrective action was applied to prevent its further--prevent it occurring again, or HCFA or one of the DH status organizations will go in, if it's a transfusion service. Or if it's an incident that occurs outside of the blood bank, then HCFA will follow-up and report back to us what they found.

So with that, are there any questions?

DR. NIGHTINGALE: Are there, indeed?

MS. O'CALLAGHAN: I did such a good job or--

DR. NIGHTINGALE: I would, I would second that, Sharon. We thank you, and the speakers up to this point, for having done a great deal to compress an eight hour operation into a six hour operation. It is now 6:20 p.m. It is our hope that we will have completed the day's on or about 7:00 o'clock. I think Dr. Battles and Dr. Kaplan both have presentations.

But Dr. AuBuchon, did you have a question right now?

DR. AUBUCHON: Well, I had a question, and I don't know whether possibly Jay should answer this or someone else from the agency. But, clearly, up to this point the agency has felt, as demonstrated in this very nice exposition, that those errors which occur after a unit has left the control of the transfusion service do not get reported to the FDA. But clearly from Jeanne Linden's data and other data, those may be the errors which occur most frequently or have the greatest potential to cause harm.

Is there some regulatory formula by which that could be brought into the same reporting process so that while you are trying to compile information that may be useful to the industry, you really get a complete picture of where all of the near misses are occurring and not just the fatalities that occur outside the control of the blood bank?

DR. EPSTEIN: Well, let me venture an answer. And Steve Maciello, who heads our Office of Compliance and Biologic Quality, may like to add something. But I think that the separation in the reporting systems that you've heard reflects the philosophy that FDA is focused on product manufacture, that is manufactured products and their related claims; whereas, the administration issues we're really seeing as medical use issues, which is not an historic domain of FDA engagement.

But your question whether these could be integrated, I guess is open-ended. But we would have to review what our basis of regulatory authority is to do that. So the answer in a nutshell is we don't regulate practice of medicine.

DR. NIGHTINGALE: Mr. Maciello, I don't see you. Would you like to add anything?

MR. MACIELLO: No.

MS. O'CALLAGHAN: I think it could be possible that we could have, if we had some type of voluntary system to report everything else that's not reportable to FDA, but we could share information and get a, a bigger picture. But as Dr. Epstein stated, our mission is, is in manufacturing for products. So it's not the administration. So having that reported to FDA would significantly burden us, for one, and it's not really our focus.

DR. NIGHTINGALE: Mr. Allen?

MR. ALLEN: Based on the numbers you gave us, the reports you get from the licensed blood banks versus the reports you get from the unlicensed blood banks, what are the percentages you think you should be seeing back from the unlicensed blood banks?

MS. O'CALLAGHAN: Well, I don't know if I can give you a good estimate. I believe that it should be definitely higher than we're looking at. With the licensed establishments, there's only a couple hundred, I think like 400 licensed establishments as opposed to 2,800. With the types of incidents that are reported by the licensed facility, I find it hard to believe that the unlicensed places are not making similar mistakes.

MR. ALLEN: And a couple other things, would you prefer, then, if everyone was required to give you these reports versus these certain entities being voluntarily doing this?

MS. O'CALLAGHAN: Yeah. Yes.

MR. ALLEN: Would that make the process--

MS. O'CALLAGHAN: Yes. In a way it's--and like I said, that's the direction that we're headed.

MR. ALLEN: Okay.

MS. O'CALLAGHAN: With our proposal to include the unlicensed facilities and the, and the transfusion services.

MR. ALLEN: Okay. The last thing is what's the FDA's policy regarding patient notification on, on these reactions?

MS. O'CALLAGHAN: Again, we don't get into that area of, of patient interaction because our focus is on the product. We make sure that the investigation is done and corrective action is applied to prevent any kind of further events. But as far as patient notification, we don't--

MR. ALLEN: Is there any agency that you're aware of that, that handles that part of it?

MS. O'CALLAGHAN; It's very possible that health care finance administration may deal with some of that. I know, I mean they deal with the hospital issues. I don't know how far they go with, with patient notification in different scenarios.

MR. ALLEN: Okay. Thank you.

DR. NIGHTINGALE: Are there any other questions for--

DR. GUERRA: I just wanted to ask one quick question.

Does the FDA monitor or regulate the, the actual equipment like the collection bags or the bags for administration, the infusion pumps, and the equipment that sometimes obviously, has an employment role in some of the transfusion related incidents?

MS. O'CALLAGHAN: Yes, we do. We regulate those under our device requirements, our regulations for devices. And when events happen related to those devices, in some cases they would be reportable under the medical device reporting mechanism.

DR. NIGHTINGALE: Are there any other questions or any final comments? Sharon, would you like to make any final comments?

MS. O'CALLAGHAN: I think that's it.

DR. NIGHTINGALE: Well, I--we all thank you very much.

Our next speaker is Dr. Jim Battles, from the University of Texas Southwestern Medical School. Jim, the microphone is yours.

DR. BATTLES: Thank you.

I'm going to talk about construction of a Medical Event Reporting System for Transfusion Medicine, MERS, how we came up and developed the design of it.

What motivated the development of MERS was the fact that there is a lack of a standard method for event reporting in transfusion medicine even though there are locally developed systems and very fine systems that have been put in place to comply with FDA and other AABB and other accreditation--

One of the difficulties of this without any lack of standard, it's difficult to share data across these various systems. So the project that we undertook was to create a prototype system, and we've received funding from NHLBI to do that. Again, the goal of the project is to develop a prototype system for a national standard methodology for event reporting. Like any other event reporting, it's to gain knowledge about transfusion systems, both at the local level blood centers and transfusion services. And obviously, like anything else, the ultimate goal is the prevention of errors and adverse outcomes, which will improve the safety of blood transfusion.

One of the things that we had a little bit of difficulty is that there isn't very much in the body of literature on how do you design such a system within medicine. So there was little guidance that we had within our field, but there's a lot of information about reporting systems from outside of medicine from a number of industries. And so we wanted to take an interdisciplinary approach to the design of the systems, to take advantage of lessons learned from other high-hazard industries from nuclear power, from aviation, some of the other things we heard about earlier. And to do this, we also said, since we don't know where we are, why retrofit something. Let's look at an ideal system.

And under the ideal system you say, we've got all of the resources in the world that we can use, what are the characteristics of an ideal reporting system? And then you kind of work backward to seeing what can be done and what's technologically workable and come with your recommended system. The theory is that you get a better system that way than starting from what you have and move forward by conceiving it that way.

In order to undertake the concept of the ideal system, we used a Delphi approach, which we gathered together a panel of experts, both from transfusion medicine, medicine in general and leaders in human error, human factors, from aviation, nuclear power, the people who developed the Aviation Safety Reporting System. You've heard several references to Jim Reason, he was on the panel, Lucien Leap [ph.], others. We really felt we had a blue-ribbon panel to conceive of what such a system would be like.

We then took the results of the Delphi and said, okay, now we've got to build this in 18 months and get it up and operating, the prototype test, what are the constraints that we would have? So our Steering Committee took the information from the Delphi and developed a model that could be implemented within 18 months and meet FDA and AABB requirements.

We wanted to be sure that we had a broad spectrum of the stakeholders in transfusion involved. And, fortunately, many of the people who were involved in various stages of this project, some of them are in this room. We had representatives from the Food and Drug Administration--you just heard from our FDA representative--AABB, ABC, American Blood Resources, American Red Cross and Blood Systems, Incorporated, plus local blood centers and hospitals.

Well, what is it that one would say are the features of an ideal reporting system as we came up with it with our data? Well, not surprisingly, you see the confidential no fault. And, again, no fault implies that the person who gave you the information should not have any adverse events occurring to them if they give you information; in other words, don't shoot the messenger.

It also, if it's going to be a workable system in our environment, it had to be integrated with the current quality assurance systems that were already in place and required by regulation and accreditation, and it should include all events, including near misses. And events should be classified, not surprisingly, by what type of event and where did they occur in the transfusion process from collection to delivery. And we needed to look at underlying root causes of the event.

I want to emphasize one of the things that we will stress is that every event has a number of causes--so those are multiple, that's plural by intention--and that one of the things that should happen, particularly for the voluntary type systems, there is some benefit to have events reported to a nonregulatory body. Essentially, don't report to the cops. That's a kind of a common theme that comes up. So that there needs to be an arm's length. The data can be shared, but there's a somewhat disquieting effect, particularly for the benign and near-miss data, when it gets reported to a regulator.

What are the features of MERS, some other things? One is that we want to concentrate on the near-miss and benign events with potential for harm. They may or may not be reportable to the regulator. They may not fit Sharon's definitions. The system should be complementary to what's required. And most of all, it's got to be a useful management tool for blood centers and hospitals. To be effective, it's got to be a useful tool rather than just simply an exercise in reporting.

And if you compare some of the lessons learned from aviation that were in Dr. Reason's most recent book, you can see there's quite a convergence. They talk about indemnity, confidentiality, separation from the regulator, and there's got to be feedback. The system has got to provide useful information to the people who use it so that they get motivated to continue to reporting. If they don't see any feedback, the motivation for reporting is--and second of all, you have to something that's easy to use. It shouldn't be so complicated that no one can do it.

We also took a look at the iceberg model of accidents and errors that was first developed in the 1930s, when we're talking about the "tip of the iceberg." The sense here is that major accidents, those sentinel events are just the tip of the iceberg. And underlying that are benign events that no harm was done, but the potential for harm can be present.

Now, I should say that the emphasis there is on a benign event and a misadventure, you know, that's very visible. There's death, severe harm, it actually happened, you know "60 Minutes" is on your doorstep. A benign event, the difference there is the event actually happened, but the difference here is there is potential for harm may be present. And it's there by the grace of God or the robust nature of human physiology that this event did not turn into a misadventure. But the benign event actually happened. And, also, benign event may be very benign, and it had no potential for harm.

A near miss, unlike a benign event, is that the unwanted consequences were prevented because somebody stopped the event from occurring. There was an actual prevention by someone, and in some cases could be a planned barrier, but in most cases, it's the sharp-eyed tech who recognizes something is going to happen or the nurse, getting ready to hang a unit, says, "This is the wrong unit," and stops the event. That's a near miss. And the importance there is the emphasis on recovery. Because oftentimes we don't look at why did somebody find this? Why did it recover? We may, in fact, by not looking at recovery, we could make changes to rip out what our invisible recovery mechanisms are. So it's important to keep track of not only what it is, but why it's recovered.

The systems in a reporting system is detection, selection of events for complete description, computation and interpretation. One of the important things about detection is that it is the first step in error management. And from a safety management point of view, you want your detection rate to be high. Because, remember, these are information, and so we want to increase error detection. And we've coined a term for that called "detection sensitivity level." So an organization who is the more generative-type organization has created a climate where it encourages that reporting. It's high. An organization, the more pathologic organization, is going to discourage information, and the DSL will be low.

Along with that, we have to put in context, in terms of the events, some scope of severity because, really, severity is the level of harm or potential for harm, and that has imbedded within the risk aspect. And like the CAP classification, a simple severity Level 1, fatal or serious outcome, injury, minor, transient and, three, no ill effects.

Again, you have to apply, on the severity index, the potential. All misadventures are severity Level 1, but any near miss could be a Severity 1, 2 or 3--the same with benign event.

So what we're trying to do in sort of a management goal is you want to drive up your detection sensitivity level. And what we're trying to manage for is to reduce risk, and that's the overall event severity. So, if we're successful, we should have these polar concepts. And this also shows up, this increase in reporting, decrease in severity, from data in Jim Reason's most recent book talking about an aviation safety reporting system in Great Britain. So this is the kind of a goal that we're trying to achieve.

However, if we're successful at increasing the DSL, you can quickly get overwhelmed by reports. So we need to be able to have a system that is capable of processing an increased number of reports because data that gets reported in a format that can't be used for analysis without ease of use is worthless. It just clutters up desks, and offices and filing cabinets. So we needed a process of routine investigation and expanded investigation.

The process, like most of what we've talked about, is constantly, when you investigate, asking why. And, again, it's the usual kind of things--who, what, when and where. And we keep asking why, in the investigation process, like Toyota has recommended, you ask why five times. You keep finding out where that was, what are the antecedent blocks, what was the event, what were the things that led up to the event. And it's important to record not only the consequent event, but the antecedent event.

In the investigation process, we use the tool of a causal tree, which allows us to do a graphic display of the investigation, working from the top event. It has two sides. It has both what went wrong and then, hopefully, because we're collecting lots of near misses, we'll get some indication of the recovery steps. And this is an example of an event. And so it's critical that we capture it.

And if we did the five whys, we should have three to five levels in a causal tree. Again, multiple root causes. Some guidance we've been given from our Dutch colleagues, it should be three to seven causes. If you only have one or two, you probably haven't asked why enough. You haven't dug deep enough, and you can see there's a recovery side.

We also have to deal with the problem, if you're dealing with larger numbers, is that you can get overwhelmed by large amounts of text. So we wanted to have a system that would combine the best of both worlds in terms of a standard coding system, as well as narrative amplification. And that comes out of the descriptions that you have from your causal tree. We also wanted to be sure that we could get the information into the computer. So we had initial concentrations on the use of smart paper. Many of you have encountered that when you took tests and other things.

In causal analysis, we were looking at a way to combine, you heard earlier about Jim Reason's category of active and latent error. This was important that we look at the active errors committed by humans, as well as the underlying latent consequences.

There's another way of looking particularly at the active errors coming from the model of Ron's Rasmussen that looked at what was--we have human failures, but what's the action of the human failure, and those are three categories of skill base, which is often the slip--rule-based and knowledge-based behavior. So there's three levels. It's a cognitive model that explains essentially the cause of the problem.

And, of course, the reason we want to capture both the latent and active errors, as they were, because it's when these things come together that we get the misadventure. It's the chain of reactions of what the ultimate--and if we're looking at near misses and benign events, hopefully it would have led to an equivalent activity.

We found a very useful classification system that had been developed by the Dutch, initially for the Petro Chemical Company. Cherk [ph.] Von der Schoff in Eindhoven University of Technology developed it. It has essentially 20 codes divided into four categories: Technical, which are the things, equipment, labels on blood bags, organizational factors, staffing levels, budgets, procedures, protocols, organizational culture; and human factors, again, using the skill-based/rule-based; and then there's always some other factors. And we have certain aspects that could be caused patient related. This has to do with either patients not providing us the correct information or human physiology that is unexplainable. And then there's always plenty of causes sometimes that we really don't quite know.

In each of the categories, there is also an external. I'll just kind of go through these quickly in the interest of time. Now, Hal is going to tell us how we took all of this marvelous stuff and implemented it into the system and some of the results. It's your turn, Hal.

DR. NIGHTINGALE: Dr. Kaplan?

DR. BATTLES: We will combine ours and then get questions later, but in the interest of time.

DR. KAPLAN: Bear with me. I realize full well that anything beyond thank you and goodnight is excessive.

[Laughter.]

DR. KAPLAN: I'm going to skip some of the background. We are implementing this in various stages with various hospitals and blood centers, and we're into a training phase right now. Going to a case example, which shows some of the application of things that Jim spoke about.

A medical technologist on the second shift was releasing blood units from quarantine into inventory and noticed an out-of-sequence transfer label number on the unit of red cells, took action, pulled the unit out, and the unit was isolated. This is the front and back of that unit, and you can see where the arrow is pointing. All of these transfer labels, you probably can't see the numbers, but the last four or the last two digits are 4444, except in the bottom number where it goes to 45. And there's a little bracket there you may not see as clearly. But these transfer labels are rather important because they are put on the tubes in which we do the infectious disease screening and every other test that we do on these units of blood. They identify those units with the tests, a rather critical simple step.

Now, there's a failure because the labels for each bag are to be separated by tearing at the marked brackets. And you can see the marked brackets because I put an arrow there. This is a causal analysis of that event. Unit of red cells almost released with an out-of-sequence transfer label number. Now, we have a recovery side here, and as Jim stressed, when you have a near miss, it's very important to capture the recovery. Because, by definition, near misses don't go on to a bad outcome, and we often don't have that data when we're making decisions about changing resources or changing a process.

On the failure side, we had an inadequate SOP for checking the label before it even got into quarantine. But the real issue was the phlebotomist tore the label in the wrong place. And you notice we put down for our coding system, "HSS," which was a slip, a human-factor slip. Clearly, nobody needed to be retrained on how to tear the label. Tore it in the wrong place. But the reason we put it in yellow was because we really didn't think that was relevant, in terms of preventing it from happening again. What was more important was the label had poor markings, which was technical design problem, and the label provided very poor feedback.

So the two things that you want in the system is to prevent error, and if an error occurs, to capture it early and try to mitigate its effects. And this label fails in both of those. And, again, this is an organizational issue because it was my choice of the label not to tack on the line. So there are a number of possible corrections, but what we did is we put a fat black line across, again, making it easier to see where to tear, rather obvious. But also if it was torn in the wrong place, there was feedback so that early on somebody could spot their own error. This happened to be the input of that data.

The event reports that we look at, as has been emphasized, is the first step is detection, getting the input. A very important thing is, if you're successful, as Jim said, you can get a lot of input, and as also was discussed earlier, the importance of having some kind of filter, deciding what gets investigated at what level and what requires being put in the database with some stopping rules or some process control rules, and what actually gets a causal analysis. And then, finally, when you get this, these events--and we'll talk a little bit about that--drive us to make changes in the system. And as has already been said, perhaps monitoring is the most appropriate change.

At Parkland Hospital, when we had an orientation about the fact that we were serious about system improvement, that we were not interested in the blame-and-train approach, that, and we have to prove that, but we had an orientation at least saying what we meant, and that people telling us about what they knew was a failure in the system that we didn't know about, we would welcome, and there'd be no harm done to the reporter.

And we had a tenfold increase in event reports. The bulk of these was not somebody coming and saying, "I did something, and I'm telling you about it." Yes, there were some of those. But most often it was things that, in the course of our double checking, things were picked up, people just corrected them and went on. So we never knew that these things occurred, but everybody was loathe to report on someone else. And by removing that punitive aspect to reporting, we had this tenfold increase that's been sustained now for several years.

But the good news is the numbers went up. The bad news is there's real pressures for corrective action; the medical culture intervention and action, the regulatory environment saying you need a corrective action, and where's your documentation of what you've done and then the litigious environment, all of the above. So there's a lot of pressure for every one of these single events to diddle with the system.

And that's been called by Demming tampering, the potential for tampering. And as also has been previously discussed, we have abounded rationality of this complex, closely linked system. And sometimes when we fix a target risk, we end up with an undesired contravening risk.

An example of the target risk, was the issue about mad cow disease and the decrease in beef consumption and the concomitant increase in poultry consumption, with a contravening risk of an increase in salmonella deaths. Very hard to predict what the contravening risks might be in complex situations. So that monitoring is often a good thing rather than taking action beyond monitoring. Monitoring should be considered an appropriate action.

And how do you make that decision? Well, we developed an action decision table for making those decisions or at least as a decision aid following a single event, and no new style. We just took severity, probability of recurrence and deductibility, the typical hazard measures, and put it into a decision table, and those were the conditions; the severity, probability of recurrence and detectability. And then the actions would be proposed change, consider change, monitor and external report.

Now, the proposed change is there rather than make a change--again, with respect for the contravening risk, proposing a change and having a very cautious review of the implications of that change. Consider change might be that we would look into doing a focused audit, get more information. This is just up to, at this hour, to strain your eyes, but this was just to show that there was a decision table, and you get driven down to the lower levels on the decision table to what action to take, whether it's proposed change, consider, et cetera. And it gives you, we use this as a written record of why we made our decision when we sign off.

Now, of the total events we looked at, it's not a surprise, given Jeanne Linden's data, fortunately, what we're emphasizing is a potential for harm and events, not actual harm. And Level 1 is our most severe potential; Level 3, the least. And this was the breakout of these events, these 384 events, Levels 1, 2 and 3. Level 1 being the orange, Level 2 being the red, and green, Level 3. Two-thirds being Level 3.

And monitoring, using that decision table, monitoring is the only action, essentially, in Levels 2 and 3. But if we take a look at Level 1, applying the decision table, the 72 events, again, monitoring was the predominant appropriate step. And that minimizes making changes to the system and getting it to the point that the effect of tampering throwing it into a nonsteady state.

This is the total number of events. And, again, you are seeing monitoring, obviously. I think that's a very important point. Monitoring isn't you toss into a database and forget it. You set controls in, and when things exceed, when events exceed the control limits, then there's a change and at that point warrants investigation.

Now, let's look at some Severity 1 consequent events. And this just reflects the linear process from left to right down to--ranging from sample handling and up through unit administration. And sample testing is one of the smokestacks that's most impressive here. And if we look at the antecedent events of this consequent event, where we picked up the error at the point of sample testing, and looking at the sample test events, we saw we had errors in the sample testing itself, but we had an interesting sample collection, no surprise.

And we found that we had a lot in our obstetrical unit, which is a very large unit with a about somewhere between 13- to 14,000 deliveries a year, and we just assumed, in that busy unit, there was a lot of expeditious, but not very correct procedures for getting samples drawn and labeled. One of our, a fellow looked into this. John Armitage looked into it. And we found, with a focused audit--a focused audit is--a point to be made here, event reporting is only one of a lot of things you can do to find out how your system is working, and it doesn't give you solutions, but it points you at the problems to find something more about.

We had 143 patients in which the computer reports showed an apparent blood typing discrepancy. And when John reviewed the charts of these patients, they found 22--he found 22 with multiple ABO discrepancies. These were the charts that were reviewed in-depth. This was over an 18-month period that these 143 patients were spotted. Out of those charts reviewed, 11 apparently had an identity switch on more than one occasion. It wasn't one change in the blood type, it was several changes in the blood type. And so apparently people were swapping clinic cards. There's also a subset in which, because of similar names, people were being assigned the wrong medical record number. But there was clear evidence of identity switch, which is obviously going to have a different solution, whether it's a different ID card with a picture in it. But clearly admonishing the staff to follow the procedures for labeling won't quite do it, and some of the high-tech approaches might not as well.

Event reporting has the ability or the product of being able to monitor events by type, determining the weak points in the system and also monitoring events by cause, which guides the choice of corrective action. A technical design issue calls for a different kind, obviously, of approach to correction than a human factors based lack of knowledge or an organizational transfer of knowledge not taking place; Jeanne Linden's example of organizational transfer of knowledge, when there hadn't been a proper shakedown or education of the staff when there was a handoff of the blood salvage equipment, and that was a lack then of training, and it was an organizational failure.

But we can look at event types, the distribution of event types; again, sample testing being one of the peaks or distribution of event causes, as I said. And we found that we had a technical design and HRV or human verification step that wasn't being carried out. But these were correlated, those two peaks were correlated, because the design of our computer system requires a redundant verification step that does nothing, but is required. And, therefore, if it doesn't get done, it gets captured as an error. We can't change our current system, but you can bet in the design specs for the next system, that will be attended to.

Now, when we talk about standardization or being able to compare information from one organization to another, this is just somewhat fortuitous, but I think it's more than that. We have distribution of causes in transfusion and in the Dutch Petro Chemical processing plant. Now, we're using the Eindhoven classification scheme. We're talking the same language. And you can see a striking overlay of human, organizational and technical distributions. And it gets to be much more impressive when you consider it across such apparently different disciplines. So I think that just underscores the usefulness of a standard way of comparing things.

When we talk about near miss versus actual benign events, as we talked about in the ones that get captured and the ones that get through the system, about a 50/50 distribution in our data at the Severity Level 1, you see the high number of 45 percent is organizational, equivalent to the human factors.

Down where we dropped down to Severity Level 3, the least, the benign events with no apparent potential for harm, the organizational factors drop way down. But at Severity Levels 1 and 2, particularly at Severity Level 1, where we're most interested in the causes, organizational issues play a significant role.

Now, one question we want to ask the data, are there differences between the causes of actual events that go through the system and near-miss events, those that we trap? And there weren't any differences in the underlying causes at Severity Levels 1 and 2. There was not a similar pattern at Level 3, but there were no differences at Severity Levels 1 and 2, which underscores the usefulness of studying these events rather than being at the peak.

So the anticipated outcomes of an event reporting system would be the detection sensitivity level goes up and stays up; over time the event severity level goes down; one way of measuring are you doing something that is effective for safety, identify where the process errors occur, the black spots, if you will, of the system; identify the system-critical failure points; identifying the common causal factors; and, again, by the similar plotting of these things, monitor the effectiveness of changes in the system; encouraging event detection through confidential no-fault reporting; reporting benign and near-miss events; identifying the system weak points before adverse events occur; look for root causes and fix the things that set up humans for failure.

Now, reporting and analysis, I think one of the themes that came out of today is you want to maintain a balance between reporting and analysis. There's a big thrust about who do you report to and what do you report. I think there's a lot of internal emphasis on internal reporting, and the quality of the analysis rather than pushing the report out and finishing that copy book for the regulatory body, and I think that's important in that we have to maintain the right kind of balance.

I think, also, if we make it easier for people to report in a standard way for the mandatory reporting, then if that becomes standardized, then we have the option then of whatever other system that complements it, we'll have the same standard language. Since they're already doing it, there's an encouragement.

One of the problems is people worry about causal analysis not being absolutely what happened. And quoting Francis Bacon, truth arises more readily from error than from confusion. But just making a stab at it is a step in the right direction.

And all of us are doing a lot with event reports. It depends on what we're doing with them, and this is the quote from Moliere, "Good heavens, for more than 40 years, I've been speaking prose without knowing it."

And we would like to propose that MERS TM becomes a model for near-miss reporting, at least as a starting point, because it will be helpful if FDA and all of the other players are talking the same language. It encourages people to move forward knowing that their, I don't know what kind of imprimatur is required, but at least they're not going to move forward, expend a lot of resources and then find out they're doing it in a way that isn't "right" or prudent.

And one would hope we can create a similar relationship between the regulator and reporting agency for benign and near-miss events, as exists in aviation.

DR. NIGHTINGALE: Dr. Kaplan, Dr. Battles, thank you very much.

Are there questions from the panel? I anticipate that there will be.

Dr. Chamberland?

DR. CHAMBERLAND: Thanks. That was a very interesting presentation. I wanted to ask you just a couple of questions that were--I wanted to make sure that I had a clear idea from your paper.

In the paper that we received, you talk about source of reports, and your feeling that if events reported to the system, if you relied solely upon self-reports, that there would probably be a fair amount of underreporting or you call a low sensitivity. So it sounded to me like there is a fair amount of an active, well, event finding, if you will, in the system, and I wanted to get a clarification about that.

You mentioned using existing sources like QA audits, and supervisory reports and whatever. So I wanted to get some sense of just how active a system it is because it sounds like it's a fairly active--

DR. KAPLAN: Well, the system grows in activity. And one of the key points that we didn't clarify is that it becomes a more active system if people take ownership of it. Just the event reporting alone, in the environment, as Jim pointed out, with feedback, with acknowledgement of the reports and feedback about change and a nonpunitive environment, you start having an effect that goes beyond merely the event reporting. The Dutch have found that it creates a safety culture. It becomes part of how people operate.

So, in a sense, this passive system really becomes an active one. And the phrase, I guess used by Mr. Francis, "not dropping the egg," if it's not used in a punitive way and people feel they own the system, event reporting is an active part. But we do other things. We do audit, we have supervisory review, we do focused audits, depending on even a trigger that comes up that suggests there's a problem. So there are other ways of getting information.

DR. BATTLES: Also, depending upon the organization that implements like a MERS approach, they may already have a fairly active reporting system. In one organization, they had a number of reports that stayed within the department or operational unit and were not forwarded to QA. The more severe things that were going to go to the FDA or others that stayed within a department stayed in the department. Well, that wasn't--it wasn't sharable information. So the minute you started including those kinds of reports in the local system, obviously your numbers of events went way up. So you could go from, you know, 200 reports a month to 1,000 reports a month. Well, was that 1,000 reports, 1,000 events, an increase in error? No. It was just information that was already there.

So you want to use multiple sources of the information.

DR. CHAMBERLAND: Okay. Right. Just one follow-up. I'm trying to get a handle on the resources that were needed to implement this in the individual hospitals, and in the article, you talk about designating perhaps an existing member of the QA staff who was the designated point person, I guess, for review or completion of these forms. Would that--did that require complete dedication to this project or, in addition, was able to handle other responsibilities? And who--are these computer-generated? Who draws these nice causal trees? I mean, it seems to me that would take a bit of time to sort of read through a report, get information and map that out. It looks very nice, but I'm assuming it takes a fair amount of time to get to that end product?

DR. KAPLAN: That's why you need that filter and decide what you're going to do, a causal analysis--

DR. CHAMBERLAND: Right, because you can't do it for everything.

DR. KAPLAN: And I think that becomes a very important decision point.

DR. BATTLES: The personnel--and we have intentionally--we're going to be a little vague because the intention is to integrate it with the existing quality assurance. So depending on the organizational size and what efforts they devote to QA, those--if they are, you know, accredited by AABB, they should have a system in place. So it's those personnel.

Now, we took the term--you'll see in the articles "the QA system operator." That was stressed by the aviation safety reporting system. You need a system operator. Well, the system operator for MERS is within the QA framework.

Now, obviously, it takes some time for a QA SYSOP to learn how to be what we call a good forester, to do tree creation. But once they get some familiarity with it, they can tree pretty quickly. Depending on the information they have and the level of the investigation they've done, they can do a tree in, you know, 15 minutes or less.

So part of it is a mind-set. You know, once you begin to think about the process, asking the why questions, a lot of things open up. And suddenly your personnel doing this, they get this, what we're discovering is the "Aha." If they start asking the why questions enough, they suddenly begin to find these latent problems in the system that were setting up the humans for--so it gets expanded. Therefore, that drives, therefore, a logical corrective action flowing from what the true action is rather than, oh, well, we counseled and retrained, which you've heard many people are saying is rather ineffective use of resources. So it's intended to use the resources of the QA components.

Now, if an institution is more pathological on Ron Westrum's scale and doesn't want to know any information, they will probably not want to put any kind of resources--and they don't want to hear. So for them, oh, this becomes this unreasonable burden. But they should have been doing it all along.

DR. KAPLAN: One of the questions you raised earlier in another talk was the issue about generalizability. The classification scheme itself is fundamentally within the cognitive science literature and was developed--was used originally in its original form in the petrochemical industry, as you can see, and nurses and hospitals don't want a system for transfusion errors, medication errors, patient falls. And so the generalizability of this kind of causal analysis, the only difference would be the process points would be different in transfusion than in medication. But there are a lot of similarities.

DR. CHAMBERLAND: That was one of my other questions. Are you working toward sort of amplifying and diversifying this? Because you could see that once you have this infrastructure in place, its adaptability to something like medication errors or interoperative errors--

DR. KAPLAN: It's been very--the University Hospital at UT Southwestern, they're looking at premature extubation in their surgical patients, just to look at this as an approach, and it was really very well received as a way of understanding what were the issues.

And we're working to implement this system at our large teaching--Parkland Hospital. And several of the other hospitals who have adopted this system, felt the entire hospital is looking--oh, this is--they want to move in that direction.

So it's been very enlightening. As the transfusion service moves, they're seemingly to be dragging the entire hospital along with them with some enthusiasm, which is kind of encouraging.

DR. KAPLAN: One of the things good about the causal tree is--I'll make this short--but it chunks the information, so you see a picture in a very summarized way of a lot of information.

DR. BATTLES: And it's possible, the way we've encoded some of the systems, that you can actually rebuild the tree from the data in the--if you code it right, it prints out the original tree, so in the structure of the report format, it gives you a lot of information, and so if the investigation is done, you get a lot of power between a standard code--and the code, by the way, for the sort of what and where--are from the FDA. We're using Sharon's codes. Now, the ones for transfusions services, because there weren't very many FDA codes because of not requiring the report of the--

DR. NIGHTINGALE: Dr. Epstein. We will in general go around the room in alphabetical order.

DR. EPSTEIN: Yeah. I'd like to press a little bit on that point of codified versus narrative reporting. To my own way of thinking, the model of classification of the analysis can be seen independently from the value of reporting it codified. Jeanne Linden described a system that appeared to capture a great deal of information, and she commented that she far preferred narrative reporting, in essence because it gave nuanced information, it was richer. And I just wonder whether you could comment on whether there are objective data that you've been able to establish on the information content or utility of narrative versus codified reporting, but please compare the same classification system so that we don't confound the issue of classification and codified reporting.

DR. KAPLAN: I think we did--we had a Dutch graduate student who was in linguistics, looking at different input forms, and looking at the information content of coding and limited narrative versus just narrative. And I--at least at that type of measure, there was a better capture of data as a standard because you were asking specific questions, and the narrative was not excluded, it was just an amplification rather than strictly a narrative.

The issue perhaps though, I think is whether you're looking to create a database that you can access and manipulate, because it's very difficult to do that with narrative, as you know, and also it's easier to get the data from the file cabinet into the database, but you pay a price, and the experts in Aviation Safety Reporting are into narrative. You have to ask also if you really drive the DSL up, if you get a lot of reports, I guarantee you that the papers pile up, and it becomes a process of a bigger job getting the data into the database, rather than dealing with as individual cases and then manipulating the data.

Now, Jeanne, obviously uses some coding, and the people in the office do the coding. So it's a question of whether you have a distributive coding at the site or it's done primarily at a central point. And it's easier, I believe, and more effective, from my bias of making this a management tool where people have their own databases as well.

DR. NIGHTINGALE: This is Steve Nightingale. This is obviously a very important question, and I would propose that tomorrow, when we've had a chance to rest, we might be able to solicit more views, not only of the members at the table, but from the audience as well.

Once again, I'd like to go around alphabetically, Dr. AuBuchon, Mr. Allen, Dr. Busch have any questions? I think Dr. Guerra had a question?

DR. GUERRA: Right. I guess my question is: do you see this becoming part of an integrated national type of reporting system, not unlike what has taken place with the Vaccine Adverse Event System, which obviously is not as complex as this, but that I think provides some extremely valuable information out in the field.

DR. BATTLES: Absolutely, yes.

DR. GUERRA: I mean, what are the issues though related to--I guess the same thing of confidentiality and security and all of those kind of issues?

DR. KAPLAN: I think unlinking the data at the central point, once the information is transferred in an analogous way to Aviation Safety, and as Jeanne points out, to the extent that legislation can balance the individual's right to know with the public good, but I think confidentiality is a very critical issue.

DR. BATTLES: The other aspect is delinking individual institutions, because what you want is the information in a national level. You don't want it used to point fingers. So this is very, I think, critical as we get--and probably, given the nature of health care, I think the issues that have been pointed up from the IOM and others, we're probably going to need some help with legislation so that the information can more easily be shared.

DR. KAPLAN: There's also a level of reporting that we haven't discussed much. I was talking to Sue Bogner, and she brought up the issue, when--Ray has talked about this onion peel, different layers down, and the environment, and regulatory issues and financial issues play a role at its core, but they may be out, farther out. And we talked about X being beyond the stopping rule, is beyond the individual institutions' ability--or the investigating group's ability to control things. But once one does an external report, hopefully, the next level up will do a similar analysis, because I think another place that we need help is that--another kind if report that ought to go centrally is how often are issues about resources and decisions at what level that define operation. You're not just resource limited, but resource limitation does play a role in safety. And I think those kinds of things ought to go to a central database to the government.

DR. NIGHTINGALE: Next, before we get to Dr. Piliavin, I need to state for the record that after the role was taken, Drs. Busch, Kaplan, Epstein, Penner and Piliavin were able to join us. Are there any questions from the remaining--I believe Dr. Piliavin wished to--

DR. PILIAVIN: This is just a matter of confusion. It sounds like a fascinating system to me, but you were careful to say that you don't want it to be restricted to the people who think that they've committed the errors, and that the information can come in from a variety of places.

I'm wondering how you--if you have a system like this, how you check for duplication of events, and you also said something about looking into records. I mean, how does this work if you're having all of these multiple sources of information, or am I more confused than I think I am?

DR. KAPLAN: Well, that's a good question. I think at any individual site, if the QA SYSOP, if you will, gets these reports, it's not a mystery as to what case is being discussed. It may be a different perspective on the same information, and this building of the causal tree is really a driver for the investigation. It is also an end product, but it's a driver. It's an iterative process. So at any individual site you'll know that this is the same case. And since that site is going to report to some--if it's reported to a central body--then it's going to be a single report.

If everybody was calling outside the system, then that would create that replication.

DR. BATTLES: And the other activities of the other kind of reports would be a normal QA kind of function. So that any of the activities we suggest are those that are recommended as doing good quality assurance work. It's just getting the data collected from all those sources into a--sort of a managed event description management system.

DR. NIGHTINGALE: Are there any other questions from the committee members?

[No response.]

DR. NIGHTINGALE: If not, let me thank you both for the substantial physical, and the very substantial intellectual contributions you've made today.

Promptly at 8:00 tomorrow morning, we reconvene. Thank you all.

[Whereupon, at 7:26 p.m., the above-entitled proceeding was recessed.]