Evaluating practice: The dual roles of clinician and evaluator
Vered Slonim-Nevo

ABSTRACT
When single-subject design is the method used to evaluate clinical work, the same person simultaneously conducts counseling and evaluation. The author describes situations in which the roles of clinician and evaluator are compatible as well as circumstances in which difficulties in exercising the demands of both roles occur. The various phases of the treatment/evaluation process are followed, and suggestions are given regarding ways to resolve differences between the clinician's and the evaluator's perspectives.
The view that clinical work with individuals, families, and groups should be systematically evaluated is well accepted. Evaluation is needed for accountability, increasing efficiency, determining the direction of treatment, involving the client in the treatment process, providing data about clients and their problems to agency policy and program decision makers, and developing empirically based models of practice (Bloom, Fischer, & Orme, 1995; Blythe & Briar, 1985; Briar & Blythe, 1985; Doueck & Bondanza, 1990; Gabor, 1989; Nuehring & Pascone, 1986; Rosen, 1993; Tripodi, 1996). In fact, the Council on Social Work Education expects schools of social work to teach graduate students how to evaluate practice. To meet accreditation standards, therefore, most schools of social work in the United States provide courses on treatment evaluation (Gingerich, 1990; Proctor, 1990; Robinson, Bronson, & Blythe, 1988).
One method of evaluation, single-case design, is often used to assess the effectiveness of clinical interventions with individuals, families, or groups. It requires an explicit definition of the problem/treatment goal, a clear definition of the intervention strategy, and repeated measures of the problem before and during the treatment process. The results are recorded in graph form to denote the extent of progress over time. The single-case design recording process enables the clinician to compare, both visually and statistically, the extent of a client's progress during the intervention phases against baseline information. From this information, conclusions can be drawn about the success of treatment, the continuation of an intervention showing positive results, the option of selecting another intervention strategy, or the need to terminate ineffective procedures (Bloom et al., 1995; Gingerich, 1983; Hayes, 1981; Levy & Olson, 1979; Slonim-Nevo & Vosler, 1991).
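As a rough illustration of this recording-and-comparison process (not drawn from the article), the following Python sketch plots hypothetical weekly severity scores across a basic A-B design and compares the phase means; the scores, scale, and file name are invented, and the "statistical" comparison is simplified to a comparison of means.

```python
# A minimal, hypothetical sketch of A-B single-case recording: weekly
# problem-severity scores (lower is better) are plotted by phase, and the
# baseline and intervention phase means are compared numerically.
import statistics
import matplotlib.pyplot as plt

baseline = [7, 8, 7, 8]          # phase A: hypothetical scores before intervention
intervention = [6, 5, 5, 4, 3]   # phase B: hypothetical scores during intervention

weeks = range(1, len(baseline) + len(intervention) + 1)
plt.plot(list(weeks), baseline + intervention, marker="o")
plt.axvline(len(baseline) + 0.5, linestyle="--", label="start of intervention (B)")
plt.xlabel("Week")
plt.ylabel("Problem severity (1 = best, 9 = worst)")
plt.legend()
plt.savefig("ab_design.png")   # the graph the clinician-evaluator would inspect

# A simple numerical summary to accompany visual inspection of the graph.
print(f"Baseline (A) mean:     {statistics.mean(baseline):.2f}")
print(f"Intervention (B) mean: {statistics.mean(intervention):.2f}")
```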
Unlike traditional approaches for assessment of clinical intervention, in which external researchers using experimental and quasi-experimental group designs evaluate the work of clinicians, single-case methodology enables clinicians to evaluate their own counseling work. Sometimes, however, providing treatment and conducting objective evaluation conflict, which leads some social scientists to question the feasibility of combining service with single-case design evaluation and experimentation (Kagle, 1982; Kagle & Cowger, 1984; Ruckdeschel & Farris, 1981; Thomas, 1978, 1983). The purpose of this article is to describe difficulties associated with clinical service and evaluation and to show how single-case methodology may link the two via compromise. Specifically, issues related to defining the problem, selecting a design, measuring the problem, and defining the intervention are discussed. Recommendations for overcoming problems that limit the evaluation of treatment service are presented.
Specifying Problems and Goals
Definition. Single-case design evaluation requires that one or more of the client's problems or goals be selected and operationally defined for intervention (Bloom et al., 1995). This procedure of defining problems and goals is inherent in many of the existing therapeutic models that are directed at solving or improving the client's presenting problem, including behavioral therapy, cognitive intervention, strategic and structural family therapy, systemic brief counseling, problem solving, and task-oriented interventions (Benbenishty, 1988; Cooper, 1990; Pinkston, Budd, & Baer, 1989; Rabin, 1981; Slonim-Nevo & Vosler, 1991). Thus, clinicians using these therapeutic models have no conflict of interest between their role as counselors and their role as evaluators.
Disagreements. Using psychodynamic intervention models may create difficulty in satisfying both the requirement to define a specific problem and the therapeutic principles commonly used in this approach (Dean & Reinherz, 1986). In psychodynamically oriented interventions, an open contract is established in which clients are encouraged to process what they feel, think, or remember in each session without a commitment to a specific problem or goal. Moreover, clients may change their complaints from one session to the other, a situation that does not interfere with the therapeutic process (Casement, 1985). Insisting on selecting and defining a problem or goal for treatment might limit clients' opportunities to bring new complaints and to explore various topics without the need to focus on a specific subject. Further, a clinician who directs a client to select, define, and focus on one topic may be perceived by the client as formal, authoritative, or even intrusive, a perception that affects the client-clinician relationship. Under the psychodynamic framework, this relationship should be affected, as much as possible, by the client's experience and not by external factors introduced by the clinician.
An example from the author's practice illustrates this conflict: M, an adult woman, sought therapy because of various problems, including inability to enter school because of fear of failure; ongoing negative thoughts about herself; and feelings of rage, jealousy, and sadness. The clinician and the client could have prioritized the complaints and worked on one at a time. M's description of her childhood included poverty, abuse, and traumas at residential centers, which led the clinician to believe that an intensive exploration of her past experiences and their implications for her present life would be more beneficial. Moreover, during the initial session, M expressed a desire to have a sense of control over the therapeutic process. She said, "All my life I was told what to do. I don't want you to give me advice about my life. I want to be independent." M might have interpreted the clinician's selecting, defining, and focusing on a problem for treatment as an attempt to control her, which would have been counterproductive to treatment. Instead, M was offered a treatment plan in which she could choose topics for discussion according to her concerns and needs. An integrative type of treatment that employed elements from psychodynamic intervention and cognitive therapy was then conducted.
Potential solutions. In therapeutic situations in which a specific problem is not defined and treatment is fairly unstructured, it is possible to assess how treatment affects a general phenomenon related to the client's situation rather than giving up evaluation altogether. In the above case, in addition to the list of complaints, M presented with low self-confidence and general dissatisfaction with life. Thus, it was agreed that if she felt better about herself and became more content with her life, treatment would be regarded as helpful. Hudson's Index of Self-Esteem and Hudson's Generalized Contentment Scale (Hudson, 1982) were selected to measure these general conditions. A similar approach, in which general phenomena are measured for the purpose of evaluation, was presented by Mutschler (1979), who trained clinicians to evaluate their psychodynamically oriented practice with families and children (see Applegate, 1992; Berlin, 1983; Dean & Reinherz, 1986).
Selecting the Design
After the problem is specified, an evaluation design should be selected. Clinician-evaluators may select sophisticated designs that improve their ability to determine whether the intervention itself, and not other factors (e.g., time, maturation, or instrumentation), explains the change in the client's condition (Cook & Campbell, 1979), or they may choose simple designs that are less capable of showing a direct link between intervention and change but are easy to use and fit well with clinical settings (Mutschler, 1979; Slonim-Nevo & Vosler, 1991).
The basic single-case design is the A-B design. "A" represents conditions prior to intervention (a baseline phase) and "B" the intervention phase. It enables examination of change in the client's condition during the intervention phase as compared with the condition during the baseline phase. Among the experimental designs are the A-B-A-B replication design (baseline phase, intervention phase, reversal to baseline, and reinstatement of the intervention), and the B-A-B design (intervention, baseline, and reinstatement of intervention). Multiple designs are suitable for simultaneous work with several target problems, settings, and clients. They include the multiple baseline design and the multiple target design. Successive intervention designs allow the possibility of changing interventions, for example, from the A-B-A-C design (baseline, intervention B, reversal to baseline, and intervention C phases) to the A-B-C design (baseline, intervention B, and intervention C). Finally, complex and combined designs enable the evaluator to examine different effects of specific interventions. For example, the A-B/C-B-or-C design (baseline, randomized alternation of two interventions B and C, and a phase in which either B or C is employed) provides the opportunity to determine whether B or C is the more effective intervention (Bloom et al., 1995).
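For readers who find it helpful, the phase structure of these designs can be written down very compactly. The following sketch uses my own notation, not the author's or Bloom et al.'s, and simply lists each design as an ordered sequence of phases.

```python
# Illustrative only: the single-case designs named above expressed as ordered
# phase sequences, where "A" is baseline and other letters are interventions.
# "B/C" here stands for a randomized alternation of interventions B and C.
DESIGNS = {
    "basic (A-B)":                 ["A", "B"],
    "replication (A-B-A-B)":       ["A", "B", "A", "B"],
    "B-A-B":                       ["B", "A", "B"],
    "successive (A-B-A-C)":        ["A", "B", "A", "C"],
    "successive (A-B-C)":          ["A", "B", "C"],
    "alternating (A-B/C-B-or-C)":  ["A", "B/C", "B or C"],
}

for name, phases in DESIGNS.items():
    print(f"{name:30s} {' -> '.join(phases)}")
```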
Because the evaluator's primary concern is to determine whether intervention has been effective and the clinician's primary interest is to create a positive change in the life of the client, the two are likely to assess the qualities of the various designs differently. First is the issue of usability. The clinician is likely to prefer the basic A-B single-case design, which is easy to use and does not require complicated advance planning. Baseline measures can be taken during the intake sessions or at home, and the intervention can be subsequently implemented (Mutschler, 1979; Slonim-Nevo & Vosler, 1991). This design, however, cannot demonstrate that a change in the client's condition is related to the impact of the intervention itself rather than to external factors, such as history or maturation (Bloom et al., 1995). Therefore, the evaluator is likely to prefer more sophisticated evaluation methodology found in experimental, multiple, or complex designs.
A simple solution to this difference does not exist. Rather it is a matter of choice: to select a practical but less powerful design or to select a more sophisticated design that is less usable in the field. If the latter alternative is selected, other issues may arise. The following sections discuss these matters, including issues related to advanced planning, carryover, contrast and order of presentation, interference between interventions, and dependency among client's problems.
Advanced Planning
Definition. Selecting experimental and complex designs requires meticulous planning in advance. For example, when using the A-B-A-B design, one must plan when to stop the intervention and return to baseline and when to initiate it again. With a multiple baseline design one should determine in advance when to start treating each of the client's specified problems. With a successive intervention design, such as the A-B-A-C design, the clinician should plan when to use these two interventions (B and C) and when to reverse to baseline.
Disagreements. Clinicians may find it difficult to conduct such planning for at least two reasons. First, regardless of the therapeutic orientation, clinicians tend to intervene first and then observe how the client is reacting to the treatment, whether the intervention is helpful, and how the relationship with the client is developing. These observations help them decide whether they should stop intervening and return to baseline, whether another intervention strategy should be selected, or whether it is time to start treating the client's other problems. Second, clients tend to stay for a few sessions and then drop out of counseling, regardless of their clinical condition. Therefore, prior planning in order to use experimental and complex designs may nonetheless result in an A-B design, at best (Thomas, 1983).
Potential solutions. How can the problem of advanced planning be resolved? Simply, the clinician's perspective may "win" and a basic design is selected. Sometimes, however, the field setting provides opportunities to use more sophisticated designs without imposing too many demands on the therapeutic process. For example, the A-B-A-B experimental design may be planned according to holidays and vacations of both client and clinician. The second A phase (return to baseline) could occur during vacation time. The length of each phase can be determined according to clinical demands and vacation times and not according to the evaluation requirements (i.e., equal phases). The data obtained during holidays may, admittedly, be biased by stress related to holiday periods, but a quasi-replication design is obtained, which is stronger than the basic A-B design.
If it is known at the clinic, for example, that a therapeutic group suitable to the client's problems and characteristics is about to be opened in a few weeks, the clinician can suggest that the client start with individual counseling (B) and, after a short break (A), move into group therapy (C). Thus, without intense advanced planning, a successive intervention design (A-B-A-C) could be implemented. In other words, complex designs are likely to be used in clinical settings if they are in accordance with the ordinary plans and schedule of the clinic.

Carryover
Several complications may arise when one uses more sophisticated designs. Carryover is a common complicating factor.
Definition. Carryover describes a situation in which the effect obtained in one phase "carries over" into the next phase (Bloom et al., 1995). For example, in the A-B-A-B experimental design, the effect obtained in the initial phase B is carried over to the second baseline phase A. This complication may also occur in complex and combined designs, such as the A-B-C design, in which the effect of intervention B is carried over to the phase of intervention C, or in the A-B-A-C design, in which it is impossible to reverse to baseline.

Disagreements. Carryover is clearly the problem of the evaluator. In the A-B-A-B design, for instance, if the client continues to improve during the second baseline phase, it is difficult to determine whether this is because of a powerful intervention whose effect continues and cannot be reversed or is simply an improvement resulting from other factors such as time or maturation. The evaluator would prefer to be able to reverse to the second baseline phase and to show that the client improves only when the intervention is presented during the B phase.
The clinician, on the other hand, is likely to be satisfied with the fact that the client is continuing to improve and such improvement cannot be interrupted. Clinicians are often unable to determine which factors created a change in the client's condition. They assume that many elements, not just the intervention, are responsible for improvement. Thus, the clinician who anticipates the joint impact of several factors may welcome a carryover condition in which the improvement continues over time. Similarly, in the A-B-C successive intervention design, the evaluator wishes to determine whether it is intervention B or intervention C that created the change in the client's condition. The clinician who accepts the idea that perhaps a bit of B and a bit of C, in an unknown combination, is related to the improvement is less concerned with this complication.
Although the clinician and the evaluator may have different perspectives on the carryover factor when a client improves, both are likely to be concerned with it when the client's condition deteriorates. In that case, it is important for the clinician to know whether a harmful intervention should be terminated or whether other factors may explain the deterioration, in which case the clinician may continue the treatment. In the A-B-A-B design, for instance, if the condition becomes worse during the intervention phases but improves during the baseline phases (i.e., no carryover effect), the clinician can clearly see that the intervention used in phase B should be stopped. The carryover complicating factor limits the ability to make such clear clinical decisions.
Potential solutions. Although clinicians and evaluators may have different perspectives on the issue of carryover, the remedy for this factor, awareness, is shared by both. Because the clinician and the evaluator are actually one person, both roles can be aware of this complicating factor while interpreting the data. In this way, the conflict between the two roles is somewhat resolved.
Contrast and Order of Presentation
Definition. These complicating factors may be at play in complex and combined designs in which two or more successive interventions are presented. The first complicating factor relates to a situation in which the client reacts to the contrast between two interventions and not to the impact of the second intervention itself. The second complicating factor relates to a situation in which the data are affected by the order in which the interventions are presented and not by the later intervention itself (Bloom et al., 1995). For example, in the A-B-C design in which B is individual counseling and C is group therapy, data showing improvement during the group-treatment phase may be related to the contrast between individual and group counseling and not to the effect of group treatment itself. Or the improvement may be related to the fact that individual counseling was presented first, followed by group counseling, and that the data would have been different had group counseling been presented first.
Disagreements. Evaluators are concerned with these two complicating factors because the factors limit the ability to determine which intervention is more effective. Clinicians are likely to be less concerned with the problem of contrast and order of presentation, particularly if the client's condition is improving. As noted earlier, clinicians tend to assume that many factors, including complicating factors, are responsible for success; because their primary goal is to improve the quality of their clients' lives, they may be satisfied with improvement without knowing what exactly caused it. In the above example, a clinician may reason that both treatments, individual and group counseling, are helpful to the client; that the client can now enjoy the group work because of what he or she experienced during the individual treatment; or that the contrast between the two strategies accounts for the change. Whatever the explanation, the clinician is pleased to see how well the client is doing. That does not mean that anything goes: clinicians are professionals who use known and acknowledged therapeutic models, but they may accept an improvement without being able to show that only their interventions made the change. On the other hand, if the client's condition becomes worse during the second intervention phase (e.g., during group counseling), clinicians ought to know whether this intervention is harmful to the client or whether the deterioration is related to contrast or order of presentation.
Potential solutions. What kinds of remedies may be helpful? Regarding contrast, Bloom et al. (1995) suggest that "perhaps reducing the contrast between interventions might minimize this type of problem" (p. 304). Regarding order of presentation, they suggest "a randomized order of presentation so that no known bias in the order of presentations of the interventions will be acting on the client. On the other hand, a more sophisticated design may be used in which interventions are presented in deliberately counterbalanced sequences" (p. 304).
These suggestions may suit evaluators. However, it is unlikely that a clinician, who selects interventions according to the client's needs, the methods typically used in the clinic, and his or her professional experience, will search for interventions with a reduced contrast between them because of the evaluation's constraints. Similarly, only a very open-minded and highly experienced clinician is able to "play" with randomization of interventions and counterbalanced sequences for the sake of evaluation. Most clinicians are likely to feel uncomfortable with such maneuvers.
Although the gap in perspective between evaluators and clinicians regarding contrast and order of presentation seems significant, these complications arise only when the purpose of evaluation is to compare the relative effectiveness of two or more treatments (Proctor, 1990). Fortunately, this situation is rare in most clinical settings. Usually clients either undergo one type of treatment or receive a combination of treatments provided simultaneously. For example, in an outpatient clinic, clients are often assigned to a single clinician who provides individual counseling using either one therapeutic model or an integrative type of treatment. In hospitals or residential centers, clients are likely to be exposed to more than one practitioner and one model of intervention, but they are exposed to all of them together and not in successive order. Nonetheless, if a conflict of interest regarding these two complications does arise, the feasible solution is awareness; that is, the person who is both a clinician and an evaluator should be aware of these problems while interpreting the data.

Interference Between Interventions
Definition. In some of the complex and combined designs, two or three interventions are provided simultaneously. For example, in the A-BC-A-B design, a client may receive antidepressant medications combined with cognitive therapy (BC); following a baseline phase (A), cognitive therapy is terminated, but the client continues to take the medications (B).
Disagreements. Evaluators would prefer to avoid such an occurrence because it impairs their ability to determine what accounts for improvement in the client's condition: medications, counseling, or a combination of the two. Clinicians, on the other hand, may prefer to administer various therapeutic interventions simultaneously, hoping that attacking the problem from different directions may be beneficial to the client. Furthermore, in 24-hour treatment centers such as hospitals, residential centers, shelters, and half-way houses, clients are exposed to concurrent interventions, including individual counseling, medications, rehabilitation, and group work, which makes it very difficult to isolate intervention strategies for the purpose of evaluation.
Here again, if the client's condition is improving, clinicians are likely to accept the idea that a package of interventions was effective without knowing the exact contribution of each element in the package. However, if the condition becomes worse, clinicians ought to know which intervention is not helpful or perhaps even harmful to the client.
Potential solutions. As with the other complicating factors, this problem has no real solutions. Separating interventions is possible in outpatient settings with clients who have less severe problems and therefore do not need multiple treatment modalities. Alternatively, both evaluator and clinician may regard a package of interventions as a single global intervention and evaluate its effectiveness.
Dependency Among Client's Problems
Definition. The multiple baseline design across a client's problems is highly recommended when clients have more than one problem and an experimental design cannot easily be employed. In this design, two or more specifically defined problems or goals of the client are selected. First, baseline measures are taken for all problems. Then the clinician treats the first problem while continuing to take baseline measures for the others. When the conditions related to the first problem begin to improve, the clinician uses the same intervention to treat the second problem, the third, and so on. For example, a client may suffer from negative thoughts about him- or herself, lack of social contacts, and problems with a boss at work. The clinician takes baseline measures for the three problems, then begins treating the client's difficulties with the boss using cognitive therapy while continuing to record information on all problems.
Following improvement in the first problem area, the clinician treats the client's lack of social contacts and finally negative thinking using cognitive therapy. If each problem is resolved only after the intervention is introduced, it is possible to claim that the intervention, not other factors, is responsible for the improvement (Bloom et al., 1995).
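A small, hypothetical sketch of how such multiple baseline data might be organized and inspected follows; the problems mirror the example above, but the intervention start weeks and scores are invented for illustration.

```python
# Hypothetical multiple baseline data across three of a client's problems.
# For each problem we store the week its intervention began and weekly
# severity scores (lower is better); the causal argument rests on improvement
# appearing only after each problem's own intervention starts.
import statistics

problems = {
    "conflicts with boss":      (3, [8, 8, 7, 5, 4, 3, 3, 2, 2, 2]),
    "lack of social contacts":  (6, [7, 7, 7, 7, 7, 6, 4, 3, 3, 2]),
    "negative thinking":        (8, [8, 8, 8, 8, 7, 8, 7, 7, 5, 3]),
}

for name, (start_week, scores) in problems.items():
    before = scores[: start_week - 1]   # this problem's baseline weeks
    after = scores[start_week - 1 :]    # weeks after its intervention began
    print(
        f"{name:25s} baseline mean {statistics.mean(before):.1f}, "
        f"post-intervention mean {statistics.mean(after):.1f}"
    )
```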
Disagreements. The evaluator, in order to show the effectiveness of the intervention, requires that no problem depend on another and that each problem be affected separately by the intervention. In regard to the above example, to show that cognitive therapy works for the client, improvement should occur in the first problem being treated (conflicts with the boss at work) without improvement in the other two problems. Similarly, improvement in the second problem treated (lack of social contacts) should also occur without improvement in negative thinking. Each problem should be resolved only when the intervention is introduced.
The clinician may view the situation differently. Usually, when clients improve in one area of their lives, they are likely to improve in other areas as well. In regard to the above example, a client who has resolved conflicts at work and has more social contacts is likely to have less negative thinking about him- or herself. Furthermore, not only do clinicians expect this kind of dependency among problems, they often welcome it. In brief counseling, for example, clinicians treat a single problem assuming that the resolution of a specific problem will affect other areas of the client's life (Fisch, Weakland, & Segal, 1986).
Potential solutions. The conflict of interest between the clinician's perspective and the evaluator's cannot be easily resolved. However, because the clinician and the evaluator are one person, he or she can be satisfied in either event: if no dependency exists among the problems and the intervention is effective after it is introduced, the evaluator's side of this person is satisfied; if an improvement in one area of the client's life affects other areas, the clinician's goal is achieved.
Measuring the Problem/Goal
In measuring the client's problem, standardized scales with established validity and reliability may be used. Self-made scales specifically devised to assess the client's unique situation (these usually have unknown reliability and validity) are also recommended. Self-made scales are either self-anchored or rating scales. The self-anchored scale is constructed jointly by the clinician and the client to measure the client's thoughts and feelings.
The client describes (anchors) the worst scenario, the best scenario, and the situation in between, and the descriptions are placed on a scale (e.g., "1" is the worst, "9" is the best, etc.). Rating scales are similar, but someone else (a relative or a teacher) defines the points on the scale and records the data. Observations in vivo or in the clinic can be used to record the client's behavior as well (Bloom et al., 1995).
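As an illustration only, a self-anchored scale of this kind could be represented as a small lookup of the client's own anchor descriptions plus a per-session rating log; the anchor wording and the helper function below are hypothetical, not part of any published instrument.

```python
# Illustrative sketch of a self-anchored scale agreed on with the client:
# the client's own descriptions anchor the end points and midpoint, and a
# 1-9 rating is recorded at each session. Wording and values are invented.
ANCHORS = {
    1: "Worst: negative thoughts about myself fill the whole day",
    5: "In between: negative thoughts come and go, but I can function",
    9: "Best: I notice a negative thought, let it pass, and move on",
}

session_ratings = {}  # session number -> client's 1-9 self-rating

def record_rating(session: int, rating: int) -> None:
    """Record the rating given during the first minutes of a session."""
    if not 1 <= rating <= 9:
        raise ValueError("rating must be between 1 and 9")
    session_ratings[session] = rating

record_rating(1, 3)
record_rating(2, 4)
print(session_ratings)  # {1: 3, 2: 4}
```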
As with the selection of the problem or goal for treatment and the selection of the design, clinicians and evaluators may prefer different types of measures and may have different perspectives on issues related to measurement. The following three potential differences may be particularly relevant to clinicians who wish to evaluate their practice.
Devotion of Time
Disagreements. Clinicians are likely to prefer simple and few measures that require minimal time to record. Evaluators are likely to prefer sophisticated and multiple measures that depict the problem from different angles.
Potential solutions. Several steps may be taken to reduce the time related to measurements and thus increase clinicians' willingness to use standardized and self-made scales. First, organizational support is helpful in reducing time costs (Robinson et al., 1988).
Many clinicians working in public organizations are required to fill out forms about their clients and write narratives about their progress. Agency administrators and supervisors may accept graphs showing a client's progress. In this way, taking measures and recording results can be done during the time previously allotted for writing narratives and filling out forms.
With regard to standardized scales, depending on the problem types common to the agency, suitable and short scales could be accumulated in the office. In fact, numerous rapid-assessment instruments have been developed specifically to meet the need for quickly administered, useful, valid, and reliable instruments (Corcoran & Fischer, 1987). In this way, clinicians need not search for an appropriate measure outside the agency but can simply use scales that are readily available. Similarly, with regard to rating or self-anchored scales, forms can be prepared and ready for use in the agency. Administrators can also assign a staff person to be responsible for collecting standardized measures, making them available, recording data on graphs, and so on. All these activities, if done by a staff person, will reduce time and encourage clinicians to collect data about their clients.

Incoming Data
Definition. Data obtained from clients are often incomplete. Clients forget to fill out forms at home, miss sessions, and drop out of counseling, or time elapses between the baseline phase and the initiation of treatment.
Disagreements. This very common complicating factor limits the ability of evaluators to analyze data and draw conclusions about the effectiveness of the intervention. Clinicians, however, are accustomed to this phenomenon and expect clients to forget homework assignments and drop out of counseling. In fact, data indicate that the median number of sessions in mental health settings is between five and six (Fisher, 1980).
Potential solutions. One way to reduce the problem is to take measures during the sessions and avoid asking clients to do so at home. Baseline data can be obtained via the telephone two to three times a week prior to the initiation of the intervention phase and at the beginning of the first intervention session. Then standardized scales can be filled out approximately once a month and self-anchored or rating scales during the first five minutes of each session.
Although using other techniques of recording may yield more points of measurement, this method of collecting information increases the likelihood that the evaluator will have data to examine.

Reactivity
Definition. "Reactivity simply means changes that come about in the client or in the problem due to the act of measurement itself" (Bloom et al., 1995, p. 51). Selfmonitoring of one's behaviors, feelings, or thoughts may foster a sense of participation in the treatment and enhance self-awareness and thus improve the client's condition (Applegate, 1991). Indeed, various authors have documented how selfmonitoring of overt behaviors, as well as of thoughts and feelings, acts as an agent of positive change (see Applegate, 1991, 1992). On the other hand, Applegate (1991, 1992) reported that self-monitoring of subjective states with standardized scales is not reactive.
Disagreements. Whereas the clinician may view measurement as a therapeutic tool, the evaluator prefers nonreactive measures. This gap exists only when self-monitoring improves the client's situation. If self-monitoring of behaviors or thoughts harms the client, the clinician and the evaluator are likely to prefer nonreactive measures.
For example, if monitoring the frequency of shouting at children helps to reduce this behavior, a clinician would encourage such recording, whereas the evaluator would search for nonreactive ways to measure this problem. However, if self-monitoring of obsessive thoughts, for instance, brings out these thoughts, both the evaluator and the clinician would prefer nonreactive measures. Once again, when the client's condition improves, the gap between the two roles grows, but when the condition becomes worse, the gap narrows. No real solution to this gap exists, although awareness of the complexity of the situation helps with selecting measures and interpreting data.
Defining and Monitoring the Intervention
Definition and disagreements. Intervention in single-case design is analogous to the independent variable in experimental research designs. Therefore, from the evaluator's perspective, it is important to define the intervention operationally and monitor its implementation in the field. It is expected that some type of monitoring plan will be instituted that permits one to determine whether the planned intervention was in fact applied, what techniques were used, and how frequently and how intensively they were used (Rosen, 1993).
Clinicians may find it difficult, or even disadvantageous, to define in advance the specific details of the intervention. Planning in general terms may work better. After the initial evaluation of the situation, a clinician is likely to have some general ideas about how to help the client. He or she may have a plan about whom to invite to counseling: the individual alone, the couple, or the whole family. The clinician may have some general ideas regarding the initial stage of the therapy. Many of the specific details of the process, including the techniques to be used, their frequency and intensity, the length of counseling, and the contents of the sessions, are likely to be determined by the way the treatment progresses and, therefore, cannot be planned in advance. Even at the beginning of a session, a clinician may not know how it will eventually turn out, let alone be able to describe in advance the whole therapeutic process in concrete and operationally defined terms. Clinicians are professionals whose work is based on objective knowledge; however, intervening in someone's life also involves intuition, creativity, manipulation, and spontaneity.
The difficulty of advance planning is shared not only by psychodynamically oriented or integrative interventions but also by more structured interventions. For instance, using cognitive/behavioral therapy to help a client dealing with depression, a clinician may describe in general terms how negative thoughts will be disputed, humor used to put things in perspective, reframing utilized to challenge unfortunate events, and homework assignments given to practice alternative ways to handle difficulties (McMullin, 1986). However, the specific techniques, their frequency and intensity, the contents of sessions, the significant others invited to sessions, the length of treatment, the type of homework assignments given, and the like cannot be preplanned and will be determined according to the client's reaction to the treatment. Perhaps the client will respond well to reframing and the disputing of negative thoughts, and the clinician will then repeat these techniques and use them as homework assignments.
But the client might reject these types of strategies, stating, "I have tried this before, and it doesn't work." Then the clinician may move to more behavioral types of assignments, invite close relatives to the sessions, or suggest medication.
Potential solutions. How can this issue be resolved? Clinicians know what type of therapeutic orientation they tend to use and what techniques they feel comfortable with. Often they will use an integrative approach that may include a combination of techniques taken from different therapeutic modalities. Thus, before the intervention phase starts, clinicians will be able to describe in general terms what kind of intervention they are planning to use. However, to obtain a more precise definition of the intervention utilized during the intervention phases, clinicians may fill out a prepared form at the end of the session rather than define the intervention in advance. This form may be completed after each session in the case of short-term treatment or once a month in longer-term treatment. Also, clinicians may audiotape or videotape several sessions and then describe what they did after the intervention has already taken place. This way of monitoring the intervention does not require preplanning of interventions, yet it allows for a definition of the intervention in clear and concrete terms.
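A post-session monitoring form of the kind described here might, for example, be captured as a simple structured record; the field names below are illustrative, not a standard instrument from the article.

```python
# A hypothetical structure for the post-session monitoring form described
# above, filled out after the session rather than planned in advance.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionRecord:
    session_number: int
    techniques_used: List[str] = field(default_factory=list)  # e.g., "reframing"
    participants: List[str] = field(default_factory=list)     # client, partner, ...
    homework_assigned: str = ""
    notes: str = ""

record = SessionRecord(
    session_number=4,
    techniques_used=["disputing negative thoughts", "reframing"],
    participants=["client"],
    homework_assigned="note one situation each day that was handled well",
)
print(record)
```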
Conclusion
Many authors have noted the difficulties facing social workers who use single-case design. Among these difficulties are lack of organizational support, lack of information about how to apply the methodology to clinical practice, lack of empirical data showing that the use of this methodology can increase practitioners' effectiveness, and, recently, lack of clarity regarding how to analyze and interpret data obtained via single-case methodology (Gingerich, 1990; Richey, Blythe, & Berlin, 1987; Robinson et al., 1988; Rubin & Knox, 1996). Therefore, the fact that 88% of American social workers who were surveyed by Penka and Kirk (1991) had conducted no single-case evaluation following their graduation is not surprising.
Rubin and Knox (1996), who described data-analysis problems in single-case evaluation, concluded that social work educators should "reduce the emphasis on single-case evaluation in research curriculum" (p. 61). This article took a different approach. Although difficulties associated with clinical service and evaluation are described, an attempt was made to show how the two functions could be linked through compromise. Such compromises were offered regarding the specification of treatment goals, selection of an evaluation design, measuring the problem, handling incomplete data, and defining and monitoring the intervention.
Gingerich (1990) suggested, "If evaluation is an integral part of practice ... the problem should be approached from the perspective of practice, and the needs and requirements of practice should be addressed" (p. 16). Following Gingerich's suggestion, I have addressed the perspective and needs of practice. It is anticipated that taking this approach will indeed encourage social workers to combine evaluation and clinical work.
Future studies should examine whether developing and teaching the methodology for evaluation from the perspective of practitioners rather than that of researchers lead to more evaluation in social work practice.
References
Applegate, J. S. (1991). Measurement as treatment: Are subjective self-ratings reactive? Journal of Social Service Research, 14(3-4), 45-62.
Applegate, J. S. (1992). The impact of subjective measures on nonbehavioral practice research: Outcome vs. process. Families in Society, 73, 100-108.
Benbenishty, R. (1988). Assessment of task-centered interventions with families in Israel. Journal of Social Service Research, 11(4), 19-43.
Berlin, S. B. (1983). Single-case evaluation: Another version. Social Work Research and Abstracts, 19(1), 3-11.
Bloom, M., Fischer, J., & Orme, J. G. (1995). Evaluating practice: Guidelines for the accountable professional. Boston: Allyn & Bacon.
Blythe, B. J., & Briar, S. (1985). Developing empirically based models of practice. Social Work, 30, 483-488.
Briar, S., & Blythe, B. J. (1985). Agency support for evaluating the outcomes of social work services. Administration in Social Work, 9(2), 25-36.
Casement, P. (1985). On learning from the patient. London, England: Tavistock Publications.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.
Cooper, M. (1990). Treatment of a client with obsessive-compulsive disorder. Social Work Research and Abstracts, 26(2), 26-32.
Corcoran, K., & Fischer, J. (1987). Measures for clinical practice. New York: Free Press.
Dean, R. G., & Reinherz, H. (1986). Psychodynamic practice and single-system designs: The odd couple. Journal of Social Work Education, 22(2), 71-81.
Doueck, H. J., & Bondanza, A. B. (1990). Training social work staff to evaluate practice: A pre/post/then comparison. Administration in Social Work, 14(1), 119-133.
Fisch, R., Weakland, J. H., & Segal, L. (1986). The tactics of change: Doing therapy briefly. San Francisco: Jossey-Bass.
Fisher, S. G. (1980). The use of time limits in brief psychotherapy: A comparison of six-session, twelve-session, and unlimited treatment with families. Family Process, 19, 377-392.
Gabor, P. (1989). Increasing accountability in child care practice through the use of single case evaluation. Child and Youth Care Quarterly, 18, 93-109.
Gingerich, W. J. (1983). Significance testing in single-case research. In A. Rosenblatt & D. Waldfogel (Eds.), Handbook of clinical social work (pp. 694-720). San Francisco: Jossey-Bass.
Gingerich, W. J. (1990). Rethinking single-case evaluation. In L. Videka-Sherman & W. Reid (Eds.), Advances in clinical social work research (pp. 11-24). Silver Spring, MD: NASW Press.
Hayes, S. C. (1981). Single case experimental designs and empirical practice. Journal of Consulting and Clinical Psychology, 49, 193-211.
Hudson, W. W. (1982). The clinical measurement package: A field manual. Homewood, IL: Dorsey Press.
Kagle, J. D. (1982, Spring). Using single-case measures in practice decisions: Systemic documentation or distortion? Arete, 7, 1-9.
Kagle, J. D., & Cowger, C. D. (1984). Blaming the client: Implicit agenda in practice research? Social Work, 29, 347-351.
Levy, R. L., & Olson, D. G. (1979). The single-case methodology in clinical practice: An overview. Journal of Social Service Research, 3, 25-49.
McMullin, R. E. (1986). Handbook of cognitive therapy techniques. New York: W. W. Norton.
Mutschler, E. (1979). Using single-case evaluation procedures in a family and children's service agency. Journal of Social Service Research, 3(1), 115-134.
Nuehring, E. M., & Pascone, A. B. (1986). Single-case evaluation: A tool for quality assurance. Social Work, 31, 359-365.
Penka, E. P., & Kirk, S. A. (1991). Practitioner involvement in clinical evaluation. Social Work, 36, 513-518.
Pinkston, E. M., Budd, K. S., & Baer, D. M. (1989). Evaluation. In B. A. Thyer (Ed.), Behavioral family therapy (pp. 55-77). Springfield, IL: Charles C Thomas.
Proctor, E. K. (1990). Evaluating clinical practice: Issues of purpose and design. Social Work Research and Abstracts, 26(2), 32-40.
Rabin, C. (1981). The single-case design in family therapy evaluation research. Family Process, 20, 351-363.
Richey, C. A., Blythe, B. J., & Berlin, S. B. (1987). Do social workers evaluate practice? Social Work Research and Abstracts, 23(2), 14-20.
Robinson, E. A. R., Bronson, D. E., & Blythe, B. J. (1988). An analysis of the implementation of single-case evaluation by practitioners. Social Service Review, 62, 285-301.
Rosen, A. (1993). Systemic planned practice. Social Service Review, 67, 84-100.
Rubin, A., & Knox, K. (1996). Data analysis problems in single-case evaluation: Issues for research on social work practice. Research on Social Work Practice, 6, 40-65.
Ruckdeschel, R. A., & Farris, B. E. (1981). Assessing practice: A critical look at the single-case design. Social Casework, 62, 413-419.
Slonim-Nevo, V., & Vosler, N. R. (1991). The use of single-system design with systemic brief problem-solving therapy. Families in Society, 72, 38-44.
Thomas, E. J. (1978). Research and service in single-case experimentation: Conflicts and choices. Social Work Research and Abstracts, 14(4), 20-31.
Thomas, E. J. (1983). Problems and issues in single-case experiments. In A. Rosenblatt & D. Waldfogel (Eds.), Handbook of clinical social work (pp. 603-622). San Francisco: Jossey-Bass.
Tripodi, T. (1996). A primer on single-subject design for clinical social workers. Washington, DC: NASW Press.
Vered Slonim-Nevo is senior lecturer, Spitzer Department of Social Work, Ben-Gurion University of the Negev, Beer Sheva, Israel. The author thanks Richard Isralowitz for his valuable comments on an earlier version of this article.