The limits of medical quality assurance systems - Medical Quality Management
Richard M. Burton

Quality Assurance (QA) via the process of review systems is a retrospective look at what was. It is a picture of the past. Any such system is bound to have limitations, because the past cannot be changed. In QA, the ultimate aim should be to educate physicians as to where they made mistakes so that they can learn how to prevent them in the future. Sharing knowledge of which mistakes can be avoided, so that all physicians can learn from others' mistakes, takes the whole team closer to the aim of real QA--preventing mistakes. The first part of this article looks at QA in general terms; the second part looks at inherent biases that should be removed so that the team reaches the goal of bona fide quality.
Quality equals conformance to requirements.(1) If quality is conformance to established and accepted norms, then structure comes from establishing the requirements, or "norms," themselves. Along with establishing what the norms are (and are not) comes the need to educate the team. Only a team that shares, understands, and holds the norms as its own values will be able to conform. Only norms that have been given personal value will produce the personal responsibility needed to make them habits.
Quality is everyone's responsibility, from the chief executive to those at the lowest level of the organization.(2) The norms (standards) that are set are the organization's interpretation of the demands of the marketplace. In a broader sense, the "marketplace" is a combination of legal, regulatory, political, societal, technological, economic, and competitive forces.(3) Within this externally driven establishment of norms, an organizational team must develop standards based on these forces and their pushes and pulls on what "quality" must be and what it cannot be. Beyond this, the team is charged with setting its own internally driven rules for what "quality added" norms will be built above the base requirements. This internally driven structural development is what gives the team a competitive advantage over others in the marketplace.
The second step in creating quality is to establish mechanisms for evaluating how closely the team comes to the established norms. This involves regularly scheduled appraisal.(2) The requirements must be clearly stated so that they cannot be misunderstood. Measurements are then taken continually to determine conformance to those requirements. The nonconformance detected is the absence of quality. Quality problems become nonconformance problems, and quality becomes definable.(1)
The process of quality requires that comparisons be made. The purpose of comparisons is to get those moving who aren't moving, not simply to report the results.(1) A system of comparison should have defined parameters of measurement. In the larger scope of quality, measurements should address whether all team members understand what quality is, whether they understand what falls short of quality, whether continuous measurements are being made against defined quality, and whether every member of the team has adopted the values behind each quality issue that requires review.
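As a rough illustration of what continuous measurement against defined parameters might look like, the sketch below tallies conformance per requirement. It is a minimal sketch only; the Requirement class and the requirement names are hypothetical and not drawn from any particular QA system.

```python
# A minimal sketch, with hypothetical requirement names, of tallying
# conformance to defined norms so that nonconformance becomes measurable.

from dataclasses import dataclass


@dataclass
class Requirement:
    name: str            # the stated norm, e.g. "allergy history documented"
    conforming: int = 0  # measurements that met the requirement
    total: int = 0       # all measurements taken against it

    def record(self, met: bool) -> None:
        """Record one measurement against this requirement."""
        self.total += 1
        if met:
            self.conforming += 1

    @property
    def conformance_rate(self) -> float:
        """Fraction of measurements that conformed; nonconformance is 1 - rate."""
        return self.conforming / self.total if self.total else 0.0


# Hypothetical norms, measured continually over a review period.
norms = [
    Requirement("allergy history documented"),
    Requirement("medication list reconciled"),
]

norms[0].record(True)
norms[0].record(False)
norms[1].record(True)

for req in norms:
    print(f"{req.name}: {req.conformance_rate:.0%} conformance "
          f"({req.total - req.conforming} nonconforming of {req.total})")
```

The point of the sketch is simply that once requirements are stated clearly enough to be counted, nonconformance becomes a number that can be tracked and compared over time.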
One of the basic elements of quality assurance is the establishment of quality control. Without control of actions that ensure conformance to the requirements, the result cannot conform to the established standards. Without control, everyone is left to his or her own views of what quality is. The shared team approach, with control, is mandatory. Meeting specifications for the service or product is the result of a well-controlled program.(2)
It is always less expensive to prevent defects in quality than to correct them after they have happened. In this sense, controlled, measured quality saves expense.(1,2)
Feedback on what is done right or what is not done right is essential. Effective teams have an ongoing drive to match or exceed requirements. This step requires inspection of defined quality measurements, by everyone, on a regularly scheduled basis. Feedback comes from customers, literature reviews, and expert opinions. In the latter area, a panel of reviewers meets on a regular basis to compare quality (conformance to requirements) to outcome. With a list of required actions/measurements, the "ideal" is compared to the "practiced" behaviors or outcomes. When practice equals or surpasses the "ideal," quality has occurred.
The Critical Path Method (CPM) supplies a checklist of actions required, measured against those performed. It should be developed to show not only what needs to be done and when, but also to indicate clearly to whom each responsibility is assigned. A flag is raised when an action does not match the ideal.(2)
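The flagging idea can be made concrete with a small sketch. Assuming a hypothetical ChecklistItem structure holding an action, an assignee, and a due date (the names and dates below are invented for illustration), the code raises a flag whenever what was performed does not match what was required.

```python
# A minimal sketch of a CPM-style checklist: each required action has an
# owner and a due date, and a flag is raised when performance does not
# match the ideal (not done, or done late). All details are hypothetical.

from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ChecklistItem:
    action: str                     # what needs to be done
    assigned_to: str                # who is responsible for it
    due: date                       # when it must be done by
    done_on: Optional[date] = None  # when it was actually performed, if at all

    def flag(self, today: date) -> Optional[str]:
        """Return a flag message when the item does not match the ideal."""
        if self.done_on is None and today > self.due:
            return f"FLAG: '{self.action}' not done and overdue (owner: {self.assigned_to})"
        if self.done_on is not None and self.done_on > self.due:
            return f"FLAG: '{self.action}' completed late (owner: {self.assigned_to})"
        return None


# Hypothetical review-cycle checklist.
checklist = [
    ChecklistItem("Distribute quality norms to staff", "QA director",
                  due=date(2024, 1, 15), done_on=date(2024, 1, 10)),
    ChecklistItem("Complete chart review round", "Review committee",
                  due=date(2024, 2, 1)),
]

for item in checklist:
    message = item.flag(today=date(2024, 2, 5))
    if message:
        print(message)  # flags the chart review round as overdue
```

Because each item carries its owner explicitly, a raised flag points directly to the responsibility assignment as well as to the missed action.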
The ultimate goal of quality assurance is to have all actions match or exceed the established requirements. When something does not match norms, there is no quality. Outcome has suffered. But determination of the meeting of requirements must be objective. Objectivity comes not from placing blame on individuals but from placing blame on problems. The questions and probing must be aimed at the job. The job failed, not the individual. It may be that the two are imperfectly matched, and one must be changed.(1)
Going with the premise that the job is the target, we can deal with the problems in a systematic way. Problems are announced and discussed (at regular meetings), and reasons for them are assigned.(1) When job errors occur and quality is missing, the responsible team member(s) should be educated by the rest of the team as to what is expected in order to reach quality.
Nonquality job activities can be corrected only when the responsible person understands the situation and takes personal responsibility for fixing the problem. The team must ensure, through examination, that he or she knows what the quality values are, has taken them on as personal values, and has the tools to fix the problem. This person must have internally set personal standards that match those of the organization, or nonquality will result. When a person's quality falls below norms, the obvious step is to reset the standards, get the person to agree to those standards, and then empower him or her (with personal responsibility) to meet the requirements in a given time. Anything less is failure and should be met with defined penalties.
Teaching the "nonquality" person requires that the person understand what, specifically, he or she has failed to do, that personal power exists to fix the problem, that he or she has the personal responsibility to fix the problem, that there will be a penalty for continued nonquality, and that there will be a reward for quality. The reward may be as simple as staying on the job, but it should be defined. Finally, the manager must build an absolute trust that the reward or the penalty will be used, depending on what is accomplished. If these tools are not used, the message given is that quality is not important, and nonquality will result.
The Limits of QA Systems
The only thing a reviewer can be truly objective about is something about which he or she does not care. In trying to review someone else's chart, the reviewer may be swayed either positively or negatively. Positive swaying occurs when the reviewer has a good relationship with the reviewee and wishes to maintain that relationship; damaging the reviewee's ego may cause hard feelings and hostility. Negative swaying may occur when there is already antagonism between the reviewer and the reviewee. The only way this bias might be removed is by review of a photocopy of the typewritten chart with the name of the person being reviewed already removed. This step may be cumbersome and may remove other pertinent parts of the chart. While this sort of approach to removing personal bias may be ideal, it is probably impractical for most systems.
Most physician systems do not rate people on the basis of direct observation (the most accurate way). Instead, we rely on medical records and inference; we rate the chart and not the actual performance. What reviewers may fail to think about as they sit and judge the chart is that the chart is merely a map of what happened. Maps do not show what else was going on at the time the event occurred and give only a two-dimensional picture of a three-dimensional world. This bias may be at least partially reduced by talking with the reviewee to gather additional data before the review is turned in. He or she may have important insights not in the record.
Rating errors that may occur include the following:
* The halo effect--finding well-performed care early in the rating and being swayed to have less concern about an error found later in the care.
* The horn effect--finding a care error early and being swayed to look less kindly on events discovered later.
* The central tendency--lumping all events closer to average than they deserve to be.
* The spillover effect--the last two charts seen from a reviewee showed "bad care," so the third chart by the same reviewee is scrutinized harshly even if the care was excellent. (Perhaps some of this bias can be removed by not having any one reviewer review more than one chart from each reviewee at any one sitting.)
* Proximity--one good event is found, and the next event is less severely scrutinized; a bad event is found, and the next event is looked at skeptically.(4)
Medical QA systems historically have been designed to look for "bad" events. The natural tendency of a reviewer might be to search records to find an error, thus justifying the reviewing role. If the reviewer finds a "bad" event, he or she is doing a good job. When judging another's actions, the usual reviewer uses confirmation strategies; disconfirmation is rarely used. A system that asks reviewers to look for the "bad" asks them to confirm that an error exists. They will tend to ignore "good" events that are disconfirmatory.(5)
Many systems use actuarial data rather than mere "clinical judgment." Actuarial systems are set up ahead of time with certain parameters to check as done or not done, yes or no, acceptable or not acceptable. This treatment of the real world as "yes-no" fails to incorporate human cognitive/intuitive processes. This may point out the need to talk with the reviewee before condemning him or her.(6)
There are two other limits of human appraisal. Judgments are frequently made on only a few items of data. Even when more data are added, the reviewer's mind remains unchanged; reviewers gain confidence with more data, but not more insight.(5) Reviewers also tend to have difficulty weighing information that is not mathematical. When weighing is needed, we often simplify and make an "educated guess."(5)
With all of the above bias and error potentials, one might ask, "How can we make rational judgments about the people we review?" First, we must teach QA review committees what the biases are. With this knowledge, they may be more likely to try to remove the biases and look at each event or situation on its own. Second, most systems now mark chosen parameters only "adequate" or "inadequate." Perhaps physicians need a third column on the parameters they review that says "excellent," so that people who do a good job in an area get that feedback as well. In fact, if the "excellent" part of a record is communicated to the entire staff as an example of what to do, others may try to copy such excellence, and real quality may begin to spread. Third, if spillover is a concern, any reviewer should receive only one of an individual reviewee's charts per review session. Finally, reviewers have an obligation to get all of the data; they should talk to the reviewee by phone or in person to clarify areas of concern before the final analysis is set in type.
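The "one chart per reviewee per session" rule can be expressed as a simple assignment procedure. The sketch below is only an illustration: the assign_charts helper, the reviewer names, and the chart identifiers are all invented, and charts that cannot be placed without breaking the rule are simply held for a later session.

```python
# A minimal sketch of an assignment rule that limits spillover bias: in any
# one review session, no reviewer receives more than one chart from the
# same reviewee. All identifiers are hypothetical.

from collections import defaultdict


def assign_charts(charts, reviewers):
    """Assign (chart_id, reviewee) pairs so that no reviewer gets two charts
    from the same reviewee in this session. Returns the assignments plus any
    charts that must wait for a later session."""
    assignments = defaultdict(list)
    seen_reviewees = defaultdict(set)  # reviewer -> reviewees already assigned
    unplaced = []

    for chart_id, reviewee in charts:
        # Candidates are reviewers who have not yet drawn this reviewee.
        candidates = [r for r in reviewers if reviewee not in seen_reviewees[r]]
        if not candidates:
            unplaced.append((chart_id, reviewee))  # hold for a later session
            continue
        # Give the chart to the least-loaded eligible reviewer.
        reviewer = min(candidates, key=lambda r: len(assignments[r]))
        assignments[reviewer].append(chart_id)
        seen_reviewees[reviewer].add(reviewee)

    return assignments, unplaced


# Hypothetical session: three charts from Dr. A, one from Dr. B, two reviewers.
charts = [("C1", "Dr. A"), ("C2", "Dr. A"), ("C3", "Dr. A"), ("C4", "Dr. B")]
assignments, unplaced = assign_charts(charts, ["Reviewer 1", "Reviewer 2"])
print(dict(assignments))  # e.g. {'Reviewer 1': ['C1', 'C4'], 'Reviewer 2': ['C2']}
print(unplaced)           # [('C3', 'Dr. A')] held for a later session
```

Deferring the unplaceable chart rather than forcing it onto a reviewer is the design choice that keeps the spillover safeguard intact at the cost of a slightly longer review cycle.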
Subjectivity can never be totally removed from a human system that reviews human behaviors, but it may be reduced through education and communications. If we are really after QA, we cannot afford to ignore these and other lessons on performance appraisal.
References
(1.) Crosby, P. Quality Is Free: The Art of Making Quality Certain. New York, N.Y.: McGraw-Hill, 1979.
(2.) Fallon, W., Editor. AMA Management Handbook. Second Edition. New York, N.Y.: AMACOM, 1983.
(3.) Pride, W., and Ferrell, O. Marketing. Boston, Mass.: Houghton Mifflin, 1987.
(4.) Henderson, R. Performance Appraisal: Theory to Practice. Reston, Va.: Reston Publishing Co., 1980.
(5.) Faust, D. The Limits of Scientific Reasoning. Minneapolis, Minn.: University of Minnesota Press, 1987.
(6.) Alexander, J., and others. The Warrior's Edge. New York, N.Y.: William Morrow & Co., 1990.