Manageable evaluation
Henderson, Donna A.

Counselors need information. They engage in a complex profession, working with a variety of people in different types of settings. To fulfill the inherently diverse expectations, counselors must monitor and assess their performance and the impact of their practices on the people with whom they work. In addition, the competent management of a program involves monitoring the program operations. The activities of a counseling practice and the nature and extent of the program delivery are important concerns when deciding about allocating resources, staffing, and services for clients. Having information about the outcomes of particular interventions helps in making decisions about expansion or elimination of services. Evaluative data contribute to all these decisions (Lewis & Lewis, 1991).
Hadley and Mitchell (1995) provided methods for implementing counseling research and program evaluation. Their book would prove helpful to anyone studying those subjects and incorporating research and evaluation into practice. Undoubtedly, counselors will be more effective, have more impact, and be more efficient if they incorporate an evaluation system into their practices. Counselors will find in this article a brief investigation of the benefits of evaluation, a summary of three approaches to the practice of evaluation, and the possible scope of the evaluation practice. After this review, I present an outline of the steps for planning a system of evaluation and some tools for implementing a manageable evaluation plan. Following the steps and tools, the reader will find a description of a proposed matrix that may serve as a blueprint for incorporating an evaluation system into a counseling practice. Components of the matrix have been identified from the above-mentioned reviews; the content of the evaluation system would be decided based on the needs of the counseling practice. A brief discussion of some possibilities for that content is the final part of the article.
BENEFITS
The list of benefits that could be credited to a system of evaluation is extensive. Four are provided in the following sections.
The Learning Organization
Lyons, Howard, O'Mahoney, and Lish (1997) have devoted their recent book to a thorough examination of creating a learning environment in a clinical practice. Their proposal includes the careful measurement of clinical outcomes. Steenbarger, Smith, and Budman (1996) likewise proposed the learning organization as the future goal of counseling practices.
Senge (1990) remarked that in a learning culture information is developed and collected to aid in improving productivity, quality, profitability, or morale. He suggested that people learn to recognize "circles of causality" and to move beyond linear thinking and simplistic solutions. He proposed certain aspects of thinking that promote the learning organization. Some of these thinking patterns are recognizing interrelationships and processes, distinguishing complexity, and avoiding symptomatic solutions.
In their book about developing a learning organization in a mental health setting, Lyons et al. (1997) stated that the following would be required to develop a learning organization in a mental health setting: formal mechanisms for (a) identifying and collecting information, (b) analyzing and disseminating the data, and (c) creating changes that the information suggests. A formal evaluation system could serve as an instrumental mechanism in that process.
Senge (1990) recognized five "component technologies" critical to the innovative learning organization. Three of these components are directly related to the practice of counseling.
1. Systems thinking. Counselors recognize the interrelated aspects of the individual in the world and of the world on the individual, a systemic perspective.
2. Personal mastery. The definition Senge provided could serve as a generic statement of counseling goals. He stated that personal mastery is "the discipline of continually clarifying and deepening our personal vision, of focusing our energies, of developing patience, and of seeing reality objectively" (p. 7).
3. Working with mental models. Senge explained that the process of using mental models includes looking inward to uncover and scrutinize internal perceptions of the world. Counselors will also recognize this process.
These three components are integral in a counselor's work; therefore, it would seem counselors are in an enviable position to create learning organizations.
Quality
Industry and commerce have models driven by quality control and quality assurance procedures. Winch (1996) identified four approaches to quality in these models.
1. Quality as excellence (associated with the influential work of Peters and Waterman, 1982).
2. Quality as a precise, measurable, and integral component of the product (Crosby, 1979).
3. Quality as whether the product meets the needs of the person who uses it (Juran, 1993).
4. Quality as value for money (Feigenbaum, 1991).
Whichever definition is chosen, the counselor can recognize the association of quality to a counseling practice.
Total quality management (TQM) involves the concept of improving the quality of a product, emphasizing not only customer satisfaction but also the involvement of employees in quality assessment (Deming, 1986; Eisen & Dickey, 1996). Not only have these approaches been influential in industrial settings, but the applicability of total quality to health care settings is also being explored (Berwick, 1989; Chowanec, 1994; JCAHO, 1991). Those involved in the counseling enterprise cannot be indifferent to quality or to maintaining excellence as determined against clear and recognizable standards. For example, Steenbarger and Smith (1996) recognized counseling outcomes, client satisfaction, and adherence to professional standards of care and service delivery as indicators of quality in counseling services.
Comparison of the standards and criteria on which a program was based to the delivery of the program is a tool for enhancing program quality (Winch, 1996) and a recognizable focus of evaluation. This use of evaluation helps in maintaining the connection between the goals and services of the program. Activities of counselors also may be compared to standardized norms. Johnson and Shaha (1996) suggested that practitioners shift to a "feedback loop" so that information about change and client satisfaction at each session serves as a guide to the process of therapy. Establishing this loop allows a concern with processes as they affect results, providing both feedback and momentum toward improvement (Schmoker, 1996), undeniably the goal of any counselor. The advantages of creating quality improvement practices are improved professional skills, practice groups that can demonstrate quality and satisfaction, and more effective treatment options based on the study of treatment results (Johnson & Shaha, 1996). A model of continuous quality improvement builds on the reasons for the counseling practice, the customers' voices, the components of the process, process improvement, and continuous learning and improving.
Accountability
Counselors need to determine methods for ensuring the integrity of treatment (Whiston, 1996). One use of accountability data is to document a customer's satisfaction or dissatisfaction. Funding agencies, elected officials, policymakers, and the general public can be served by evaluation that gives evidence of the value of the counseling services. Another reason for concern with accountability is offered by Froehle and Rominger (1993), who noted that counselors might need to justify their services for third-party reimbursement.
Having information to defend the continuation of a counseling program in the face of budget cuts is another cause for accountability practices (Marten & Heimbert, 1995). Accountability is also essential to employability and responsible treatment planning, according to Granello and Granello (1998). Finally, Plante, Couchman, and Diaz (1995) maintained that determining client satisfaction and evidence of treatment outcome are necessary for counseling success.
Personal Development
The terms quality and accountability may seem to imply exclusively external standards for evaluative criteria and the purposes of evaluation. Procedures related to quality and accountability do inform the careful counseling practice and provide the cornerstone of effectiveness and efficiency. Using evaluative data for personal growth, however, deserves equal footing. Evidence that the counselor's skill is essential in successful counseling (Lambert, 1989) invites personal responsibility for examining and focusing judiciously on one's attributes and skills as they relate to the interaction between the client and the counselor (Sexton & Whiston, 1991).
A competent counselor is reflective and informed. The counselor who uses multiple means of gathering evidence about the process and outcome of counseling will be better able to foster the efficiency of both (Johnson & Shaha, 1996). Lambert and Cattani-Thompson (1996) admonished counselors to attend to their participation in counseling by obtaining weekly, written feedback from clients before each session. This type of counseling evaluation can be used as action research for the development of counseling expertise.
Whiston (1996) provided a model for accountability as action research, which she described as problem solving through the scientific method. Her method would allow counselors to incorporate this thoughtful process into their daily schedule. Gathering this pertinent information would allow a counselor to identify, among other things, personal strengths and challenges, as well as patterns of difficulties or successes. These practices may easily be part of a systematic evaluation plan.
APPROACHES TO EVALUATION
Definitions
Evaluation is the study and application of procedures to perform an objective and systematic assessment. Just as counselors may choose a theorist to guide their approach to counseling, evaluation professionals have different models to direct their efforts. The consistencies that can be identified in a definition of evaluation include the following points:
1. Evaluation is conducted as a systematic investigation.
2. Evaluation is focused on the merit (or quality) and worth (or value) of an object.
3. Evaluation is designed to gather information needed in order to make reasonable judgments (Mertens, 1998; Muraskin, 1993; Scriven, 1993).
The object of an evaluation may include programs, policies, products, or personnel. Any of these may be studied to help understand what is being offered and how well the services are being delivered. In the above definition, merit refers to information about the quality of what is being studied and worth to the value of that object in relation to a purpose. In using the evaluation process, a person asks, "How well are you doing what you are doing?" (merit) and "Is what you are doing important?" (worth).
Stakeholder is a term in evaluation used to denote the people who make decisions, as well as the people whose lives the program affects. Examples of these include funding agencies, administrators, staff members, and recipients of services (Mertens, 1998). Weiss (1983a, 1983b) explained a stakeholder approach to evaluation that is flexible and responsive to needs as they emerge. The implications for a counseling practice are clear.
Evaluation incorporates sets of activities to accomplish the goals of gathering information. These methods may be used to examine the planning and design of interventions, to monitor the delivery of the programs, and to assess efficiency and effectiveness (Rossi & Freeman, 1993).
Dimensions of an evaluation study are process, outcome, and impact.
1. Process evaluation is undertaken to describe the activities being implemented. A study of implementation documents the extent and nature of what actually happens. Many of the studies related to counseling process can contribute to this type of evaluation of counseling practices.
2. Outcome evaluation is undertaken to study the effects of the intervention.
3. Impact evaluation identifies effects that are longer term as well as unintended.
Models
Knowledge from the field of evaluation provides a basis of understanding from which counselors can build a system of assessment into their practices. Chelimsky (1997), Greene (1994), Mertens (1998), and Scriven (1993) described some basic approaches in the evaluation profession. Information about the audience, methods, and questions of each is included below. This overview will summarize their ideas and provide a foundation for the interweaving of these ideas with the practice of counseling.
Accountability
One approach to evaluation is focused on the system being evaluated. The purpose is to determine efficiency and to establish accountability. The goal is to arrive at evaluative conclusions about that operating system to assist decision makers. This involves providing information. The preferred method of conducting this type of evaluation is the use of quantitative data collected through experimental and quasi-experimental designs.
This type of evaluation addresses whether outcomes have been achieved and whether the outcomes can be attributed to the intervention. The comparative efficiency of the program also may be investigated. Cause-and-effect questions are posed. Questions about efficiency usually concern costs per value received and comparing costs for similar services (Chelimsky, 1997).
Stufflebeam (1983) formulated a model for this objective-based evaluation: the Context, Input, Process, and Product (CIPP) model. These four components, context, input, process, and product, provide the basic outline for conducting this type of accountability model of evaluation.
Knowledge
Evaluation can be conducted to generate understanding and explanation. The focus is on management and the practicality, quality, and utility aspects of delivery. Methods in this approach are mixed and may include surveys (both structured and unstructured), questionnaires, interviews, and observations. Methods are chosen to match the problem in this type of responsive evaluation. The approach would be useful in determining the components of a program that are working well or that require improvement. Comparing the effectiveness of outcomes with the intended goals would be another focus of this model of evaluation. This type of practical information and eclectic methodology provide a rich description of the object being evaluated.
Guba and Lincoln (1989) explained a constructivist paradigm as fourth-generation evaluation. The effort is to use the methodology to reach shared conclusions about the evaluative activities. Their model might guide an evaluation with the purpose of increasing knowledge. Weiss (1997) discussed theory-based evaluation, another useful method.
Improvement
Fetterman (1996, 1997) discussed another approach in the evaluation field: empowerment evaluation. The purpose of empowerment evaluation is to use the concepts, techniques, and information from evaluation to generate improvement. Using quantitative and qualitative methods, the evaluator can use this method to consider individuals, organizations, communities, and programs. This is a circular process of reflection and self-evaluation. Determining the worth and merit of the object becomes a part of a continuous process of program development and improvement. In empowerment evaluation, the dynamic nature of merit and worth is recognized. The steps in empowerment evaluation are:
1. Taking stock. Determine where the program stands, including its strengths and weaknesses.
2. Setting goals. Focus on establishing goals-determining future direction with an emphasis on program improvement.
3. Developing strategies. Develop strategies to accomplish goals.
4. Documenting progress. Document the type(s) of credible evidence needed to ascertain progress toward the goals.
This brief overview of definitions and approaches to evaluation represents only a small segment of the considerable knowledge that has been generated in the field of evaluation. The three approaches have considerable overlap and methodological interchange. These points are sufficient to outline potential applications related to gathering information to make careful decisions, to describe a program, and to create a systematic self-assessment. Therefore, based on these summaries, the general purposes to be considered for the evaluation matrix are accountability, knowledge, and improvement.
WAYS TO PROCEED
Mertens (1998) explained steps for an evaluation study that can be modified for this proposal of a manageable evaluation matrix for counseling services. The steps are focusing the evaluation, planning the evaluation, and implementing the evaluation. Included in the focusing stage are descriptions, purposes, and selection of a model. In the planning stage, the evaluator(s) determines specifics about gathering, analyzing, interpreting, and using the information gathered. During the implementation stage, the evaluator(s) completes the study, analyzes the findings, and communicates the evaluation results. The matrix is presented as an aid to the first two stages of an evaluation, focusing and planning. The implementation stage should follow the careful planning of the first two steps.
Focus
As in the first step in the empowerment model of evaluation, taking stock of the current counseling program is an initial concern. The planning may begin by considering the elements that exist within the counseling practice. Those elements will likely fall into four areas: context, process, outcome, and efficiency. For the purposes of this article, context encompasses client variables as well as the setting and the procedures of the counseling services. Counseling process relates to what happens in the delivery of services. Outcome is the change that occurs as a result of the interventions. Efficiency is assessed by examining the costs associated with the resources used in the counseling practice.
After determining the components that could possibly be evaluated in each of these four categories, the counselor would begin to narrow the evaluation to a manageable scope. This process should proceed by identifying (a) the important questions, (b) the stakeholders interested in the answers, (c) information that would provide credible evidence for the answers for those stakeholders, and (d) the methods of obtaining that evidence. An example will illustrate these possibilities.
The focusing stage of evaluation is a filtering step in which the evaluator generates many possibilities, prioritizes according to current needs, and then formulates an outline from which a more detailed plan may be developed. Selecting or creating a method of evaluating is next. That determination, in this abbreviated process, would depend on whether the purpose of the evaluation is accountability, knowledge, or development.
The third factor to consider is the process of the evaluation system, which should be analyzed, revised, or redesigned according to the information accumulated. The implementation, testing, and assessment of the evaluation process is continuous, as is the improvement and education that follows this quality-generating process of a manageable evaluation.
Scope
Hiebert (1997) suggested a model for the integration of evaluation into counseling practices. His model encompassed the time frames, stakeholders, evidence, and roles and responsibilities as aspects of the evaluation policy. That process of evaluation is directed to the intervention factors, the agency factors, and dissemination of the results. He explained that an evaluation plan must address several needs. One necessary step is the identification of stakeholders. Another is the determination of the roles and responsibilities of everyone involved in the evaluation. An outline of the evidence to be considered, the worth accorded to the types of evidence, and the timing aspects of the process are other key factors. Each of the factors that Hiebert detailed affects the clusters to be considered: program factors, agency factors, and communication.
Although not conceptualized according to Hiebert's model, Plante, Couchman, and Diaz (1995) furnished a useful explanation of the process they used to measure treatment outcome and client satisfaction among children and families. After carefully reviewing available outcome and satisfaction measures, they chose the Child and Adolescent Adjustment Profile (CAAP), the Brief Psychiatric Rating Scale for Children (BPRS-C), and a modified version of the Client Satisfaction Questionnaire (CSQ-8). They also used a Demographic Questionnaire that they designed.
These instruments constituted an assessment packet to be completed at different times during therapy. The clinician completed the BPRS-C after the second session and after the last session. A parent or guardian filled out the CAAP at several intervals during treatment and at a 6-month follow-up contact. The CSQ was also completed at predetermined times during therapy. These authors described their system of evaluation as efficient in providing accurate and up-to-date records, as well as valuable client information.
Dornelas, Correll, Lothstein, Wilber, and Goethe (1996) provided some further guidelines for practitioners who design and implement ongoing evaluation. The minimum data, according to these authors, begins with descriptions of the consumers. Nelson and Neufeldt (1996) also emphasized the need to know the client. Dornelas and her colleagues suggested that other crucial factors to assess are an analysis of what happens and an assessment of outcomes. They continued by noting that combining quantitative and qualitative measures provides a more comprehensive evaluation than either method singularly, and that multiple sources of evidence are more powerful than a single one.
Rossi and Freeman (1993) clustered evaluation classes into conceptualization and design, monitoring and accountability, and assessment of utility. Those authors noted that keeping an evaluation process manageable, efficient, and organized is critical to a successful endeavor. The evaluator determines which factors to evaluate, at what intervals, with what evidence, and from which group. The use of the evidence, the method of collecting, and the format of reporting are other initial decisions (Fairchild & Seeley, 1996; Plante, Couchman, & Diaz, 1995). The proposed matrix that follows is a tool in making those decisions.
Matrix
A combination of these models and suggestions provides a foundation for the matrix, designed as a pattern for a manageable evaluation plan in counseling practices. Assessing all services and involving all parties is an unrealistic undertaking. Nevertheless, targeting different areas within crucial components and staggering the timing of assessments can help a counselor gain significant, useful knowledge. Practitioners may use this planning guide to design and implement evaluation studies.
The factors to be taken into account are the focus of the evaluation, the question and the purpose(s), the evidence and the audience, the source from whom the evidence will come, the method and measure(s) to be used to gather the evidence, and the timing. The following sections of this article will give details about the possible choices for each of the cells in the matrix.
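For readers who find a concrete representation helpful, these factors can be imagined as the columns of a single planning row. The following minimal sketch, written in Python purely for illustration (the field names and the sample entry are hypothetical and are not part of the published matrix), shows one way a practitioner might record such a row:

from dataclasses import dataclass

@dataclass
class EvaluationRow:
    # One hypothetical row of the planning matrix.
    focus: str      # context, process, outcome, or efficiency
    question: str   # the evaluation question being asked
    purpose: str    # accountability, knowledge, or improvement
    evidence: str   # evidence that would credibly answer the question
    audience: str   # stakeholders who need the answer
    source: str     # counselor, client, or significant other
    method: str     # measure or procedure used to gather the evidence
    timing: str     # when the evidence is gathered

# A purely illustrative entry:
row = EvaluationRow(
    focus="outcome",
    question="Has the presenting symptom improved?",
    purpose="accountability",
    evidence="pre- and post-treatment symptom ratings",
    audience="funding agency",
    source="client",
    method="brief standardized symptom checklist",
    timing="intake and termination",
)

Recording one such row per question keeps the scope of the plan visible and manageable.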
Focus
Four components are identifiable in the delivery of any counseling services. One of those is the context in which the practice occurs. Therefore, one area on which evaluation may be focused is the context. In this schema, the term context designates descriptions of the setting, procedures, and clients of the counseling practice. The second component on which evaluation may focus is the process of providing counseling. The outcome of counseling constitutes the third possible area of focus, and efficiency is the fourth. The following discussion provides more detail about each of these areas and evaluation questions specific to each.
Questions
Shipman, MacColl, Vaurio, and Chennareddy (1995) and Mertens (1998) reported categories of evaluation questions. Those sets have been modified to be relevant for counselors.
Descriptive Questions (Context and Efficiency)
What activities are supported?
For what purpose?
By whom?
What is the scope and cost of the activities?
Who is reached?
What variations exist?
Implementation Questions (Process and Efficiency)
Are activities being carried out?
Are the activities legal and ethical?
Do the activities conform to professional standards of practice?
Are the resources realistically managed?
Impact Questions (Outcome and Efficiency)
Are the intended outcomes achieved?
How do the outcomes vary across participants?
Are the effects worth the financial and other costs?
What alternate strategies might achieve the same impact?
Four general questions, one for each of the four focus components, summarize the emphases:
Context: What is our mission?
Process: Did we accomplish our objectives?
Outcome: Did accomplishing the objectives help achieve the goal?
Efficiency: Did we use our resources wisely?
More specific questions related to those four components are presented next.
Purposes
Evaluation may serve many purposes. Before an evaluation plan is implemented, those purposes must be determined. Mertens (1998) explained the differences in the purposes of evaluation.
1. Formative evaluations provide information that will help in improving the program.
2. Summative evaluations help provide judgments about worth and merit.
3. Developmental evaluations are designed to provide a fluid picture of the dynamics of the practice.
The three general evaluation models reviewed previously can be used to help the counselor devise evaluation systems to guide decisions (accountability), to provide a description (knowledge), and/or to enhance self-development (development). Counselors who begin to integrate a system of evaluation may choose to limit themselves to these three basic purposes initially and expand as their system grows.
Evidence/Audience
This section of the matrix is devoted to decisions related to what evidence would credibly answer the question with careful attention to who needs the information and how it will be attained. Hiebert (1997) urged broadening the range of acceptable evidence from standardized tests to more informal measures. His suggestions are included in the Method section of this discussion. Obviously, the decision about acceptable evidence must be made while considering the audience that will examine it. Therefore, these elements are paired. Audiences or stakeholders may include sponsors, governing and advisory boards, agency representatives, policymakers, various levels of administrators, staff members, recipients of the services, related significant others of the recipients, and community representatives (Mertens, 1998).
Decisions in this step of evaluation are guided by careful attention to what information would convince this audience. Examples of these choices may include evidence about client change, costs, and counselor effectiveness.
Source
This proposed matrix includes a model using multiple sources of information. Minimally, those sources are the counselor, the client, and a significantly related other (such as a spouse, teacher, or parent). Information from others also may be solicited to strengthen the findings. Multiple sources provide more complete information, and each measure represents different types of evaluative information (Lambert, Masters, & Ogles, 1991; Sexton, 1996).
Method
The processes and measures to be used to gather evidence are included in this part of the plan. Some informal methods that Hiebert (1997) suggested are checklists, portfolios, observation forms, cognitive mapping, self-monitored data, and performance assessment. More information about the possibilities follows in the discussion of process and outcome. A measurement such as a standardized instrument is a summary that necessarily omits complexity in favor of simple common elements. Sonnanburg (1996) developed a system of translating complex behavior into the observable patterns of speech, motor movements, physiological response indicators, and the relationship between behavior and the context. His classifications may prove helpful to counselors who are developing a system of evaluation.
Rossi and Freeman (1993) explained four other ways of gathering evidence: observation, records, staff, and clients.
1. Observation methods require a systematic approach. Three ways of accomplishing this are the narrative method, a data guide, and a structured rating scheme. In the narrative method the observer records events with as much detail and sequence as possible; a list of activities to be watched is often provided. Giving the observers a set of questions to answer is a more structured approach to the observation method. A structured rating scale also may be used to guide observations; observers may be handed a checklist to record time spent on activities or a scale of interactions.
2. Recorded data may be helpful in assessing the characteristics of a counseling program. Consistent gathering of data leads to reliable sets of information. Structuring record formats as a checklist whenever possible will strengthen the opportunities for gathering useful information from records that already are being compiled. An information-management system that is computer-based allows the accumulation and display of data in a variety of ways.
3. The staff may provide information by keeping logs, providing narrative reports, and completing rating forms or surveys.
4. Clients may be asked about their level of satisfaction and about the services they received. Investigating whether clients understand the treatment is also important.
The investigator then will need to analyze the evidence that has been accumulated through observation, records, staff, and clients. The analyses will provide a description of the program and a comparison of what is being offered to the original design. The types of service being delivered and the reactions of the people receiving them can be assessed.
Timing
The timing component of the matrix indicates when the evidence is to be gathered. This may be at an intake session, during a follow-up interview, after every third meeting, or any other variation on timing.
Analysis
Gathering the evidence in a thoughtful manner is not sufficient. The evaluation plan also must include analyzing and communicating the results. The analysis plan will depend on the purpose for the evaluation and the type of evidence that has been gathered. Counselors may choose an informal method of examining the data or a more sophisticated quantitative or qualitative analysis. Decisions about analysis and communication will follow logically from the previous choices. This article will omit the field of analysis and concentrate on the components related to planning an evaluation system. The reader is cautioned, however, not to reduce the impact of the evaluation efforts by overlooking the importance of the analysis stage. Table 1 presents the evaluation matrix.
Planning the Evaluation
Details related to each of the four foci of evaluation are presented here. Each section will include an expanded definition of the term, possible evaluation questions and purposes, components that could be considered, and measures that have been used for assessment. This portion of the article will furnish counselors with several categories and classifications from which to choose in designing a manageable evaluation plan for a counseling practice.
Context
Definition. Counseling services exist within a system of delivery. Some occur in schools, some in agencies, some in private practices, and some in other settings. All counseling services include some contextual features particular to that practice, as well as a sequence of procedures for the delivery of services. Wholey (1994) suggested that delivery systems are a series of actions undertaken to provide an intervention.
Components. Components of delivery systems consist of many elements, such as the clients, procedures, needs assessments, management, client pathway, and information system (Rossi & Freeman, 1993). Counselors may choose any or all of these elements to assess. The components suggested by Johnson and Shaha (1996) are a framework for taking stock of the variables in evaluation planning. Those include the reason for the practice, the customers and the process, and improving the process. Considering the reason for the existence of the counseling practice involves defining the customers and the stakeholders, as well as studying the mission statement and organization values. Other considerations include identifying and prioritizing the consumer's needs and expectations. Process improvement includes analyzing the current process and revising or redesigning based on the evidence collected.
Questions. Questions that guide this initial focus of evaluation are
What is our mission?
Who are our customers?
Who are the stakeholders in the delivery of our counseling services?
What do we value as an organization?
Evidence. The arrangements in the facilities and systems in place that allow the interventions to be available have to be described. Facilities may be defined as the offices, staff, reception area, and any other parts related to physical surroundings. Systems are the processes of reporting on what has occurred and the functions in place for the clients from the first time they enter until they terminate. Some examples of these systems are intake forms, sign-in forms, referral systems, and case-management procedures. Descriptions of the context evaluation may encompass whether clients return for appointments, whether facilities are adequate, how clients and others are greeted and led through the intake, as well as the assessment, treatment, and termination phases of counseling.
Specification of services consists of determining the actual services that are provided. Program elements may be defined as time, costs, procedures, or products. These elements, simple or complex, should be explained in terms of concrete activities and actions. The contextual features include several variables. The number of clients served, the size of the staff, and the assets of the organization are factors for assessing the size of the organization. The structure of the organization involves the divisions and departments, as well as lines of authority and responsibility. Organizational resources include the budget, skilled staff, and financial position. Any of these factors may be used as a component of the context, as well as evidence to elucidate an evaluation of the context.
Process
Definition. Process refers to what happens during a counseling session. Process focuses on operations and implementation of services.
Components. Sexton and Whiston (1991) argued that establishing a relationship, structuring a counseling session, and choosing interventions are basic to client change. Sexton, Schofield, and Whiston (1997) expanded on those components in their discussion of evidence-based practices. Lambert and Bergin (1994) reviewed empirical research and concluded that the common factors associated with positive therapy outcomes can be grouped into support factors, learning factors, and action factors. These factors seem to operate most powerfully within counseling sessions by helping the counselor and client build a working alliance. The client gains a sense of trust, security, and safety, while tension, threat, and anxiety decrease. The client may then begin to understand problems differently and change behavior by reframing fears, taking risks, and working on interpersonal problems. These authors have provided a useful summary of the counseling process.
Sexton, Schofield, and Whiston (1997) discussed the use of evidence-based protocols that are effective with certain client problems. These protocols also may provide components for assessing process.
Questions. Process-level evaluation addresses questions such as:
Did we accomplish our objectives?
Who is served? What are the relevant characteristics of the consumers? What problems are the most common?
How many people are being seen? How often and how long?
How do the people come to us? How many continue to come, and how many drop out?
How are the clients feeling about the interventions?
What do they like or dislike? What suggestions do they have?
Evidence. Goldfried (1980), Frank (1982), and Hill and Corbett (1993) have suggested the following as strategies common to all approaches to therapies:
inspiring and maintaining hope
arousing emotions
providing opportunities for new learning
enhancing a sense of mastery or self-efficacy
constructing opportunities for the internalization and maintenance of therapeutic gains
developing the therapeutic relationship
obtaining a new perspective
Froehle and Rominger (1993) outlined criteria for evaluating consultation research. Some are applicable to evaluating counseling process. The criteria include the following.
1. Treatment effectiveness examines the amount of improvement after the beginning of treatment.
2. Treatment integrity refers to the service being implemented as prescribed.
3. Social validity is the consumer's evaluation of the treatment being significant and relevant.
4. Treatment acceptability refers to the consumer satisfaction with the intervention.
Drozd and Goldfried (1996) recognized that researchers concerned with the process of counseling currently focus on how clients change during treatments. Identifying change mechanisms is the research goal. Goldfried and Padawer (1982) and Drozd and Goldfried (1996) identified common change principles.
1. Renewal of the client's sense of hope. This involves the counselor interrupting the sense of discouragement with which clients often begin (Prochaska, 1979).
2. The interaction between counselor and client. One predictor of change is a therapeutic relationship that is warm and supportive and in which the client is engaged (Beutler, Machado, & Neufeld, 1994).
3. Expanding clients' awareness of themselves and of the world surrounding them. Encouraging clients to behave in alternative ways that challenge old patterns is identified as a principle of change. Drozd and Goldfried (1996) suggested that corrective experiencing is as important to a client's improvement as any other variable.
Evidence related to any of these change variables or to the strategies common to all therapies or to the four criteria may serve in evaluating the process focus of evaluation. The extent to which practice adheres to standards-based performance evaluations or "best practices" is another method to evaluate counseling process (Steenbarger & Smith, 1996). This type of evaluation would document how closely the delivered services adhered to recognized standards of care. These standards can be defined flexibly and implemented according to professional needs. Steenbarger and Smith provided more details in their article regarding the assessment of counseling quality.
Any of these summary statements of the components of counseling process could be used as an emphasis for the evaluation study.
Methods/Measures. Orlinsky (1994) defined six categories of process that might prove helpful in determining what aspect of process to evaluate:
1. The formal aspect: the contractual provisions such as the therapeutic situation, roles of counselor and client, and others.
2. The technical aspect: operations such as the presenting problem, the case conceptualization, and the treatment plan.
3. The interpersonal aspect: the alliance between counselor and client.
4. The intrapersonal aspect: self-relatedness and experience of self in therapy.
5. The clinical aspect: the in-session impact of the interactions on the client's insight and self-understanding and the therapist's self-efficacy.
6. The temporal aspect: interactions that occur over time and events that characterize the overall treatment.
Some of the behaviors that may be used to assess the process of counseling include nonverbal behavior, verbal responses, client experiencing, therapist intentions and client reaction, content, strategies, interpersonal manner, and the therapeutic relationship (Hill, 1991). Hutchinson (1997) argued that qualitative methods of evaluation have a crucial part in assessing counseling process. Nelson and Neufeldt (1996) suggested that the counselor's inquiry into the therapy process is imperative to enhancing the relationship.
Hill, Nutt, and Jackson (1994) reviewed the process research that had been published in the Journal of Counseling Psychology and the Journal of Consulting and Clinical Psychology between 1978 and 1992. They identified measures that have been used in process research. Some of those are listed below. A more complete review can be found in the works of Greenberg and Pinsof (1986) and Russell (1987).
Hill, Nutt, and Jackson (1994) recognized the Session Evaluation Questionnaire, the Counseling Evaluation Inventory, and the Client Satisfaction Questionnaires as well-developed measures of session or intervention evaluation. These suggestions allow choices for measures to address evaluation of the counseling process.
Outcome
Definition. Outcome refers to the results of counseling. The issue is whether the client changed as a result of counseling, and how much change occurred. Those changes may occur immediately, as a result of a counseling event, as a session outcome, or as the overall treatment outcome. The first two may be considered at the process level and the other two as change resulting from treatment. Ideally, the change would be measured from the perspectives of the counselor, the client, and an important other (Hill, 1991).
Sexton, Whiston, Bleuer, and Walz (1997) discussed many ways of incorporating the research on outcomes into a counseling practice. Their insights and processes make valuable contributions to providing services; adding an evaluation step would further enhance a counseling practice.
Components. Oliver and Spokane (1988) summarized the outcomes of career interventions. Based on their review of studies, the variables they identified as evidence of outcomes are career decision making, effective role function, and counseling evaluation. Some of the descriptors are appropriate in other counseling situations. For example, decision making includes the aspects of accurate self-knowledge, realistic choice, information-seeking skills, attitude toward choice, and satisfaction with decision.
Outcome evaluation measures may also include a change in the symptom or problem, a general adjustment or behavior change, and/or client satisfaction with counseling. As noted previously, the indications of change may be based on the perceptions of the client, the counselor, or an outsider. Sonnanburg (1996) considered variables that integrate to produce outcomes in therapy. He proposed that these "elemental features" would suggest a pathway to understanding the effects of therapy.
Rapport denotes the factors that contribute to the duration of the relationship. Assessments by the clinician and the consumer involve the participants' understanding of shared events. The rationale is the explanation for the therapy that leads to an agreement on goals, or the therapeutic contract.
As the sessions continue, the counselor and the client perform procedures and then review what happens. Generalization of the benefits of counseling in other settings is another functional element. Finally, termination is the transition as therapy concludes.
Other frequent assessments are of the functional status of the client and their symptoms. Functional status implies the extent to which the problem hinders the client's day-to-day functioning. Any of these elements may serve as components to be evaluated.
Questions. Outcome evaluation questions focus on the effects of the intervention:
Did accomplishing the objective help achieve the goal? Has the consumer changed in any way?
How have the clients changed? How have behaviors, thoughts, and emotions been altered?
Has the client increased knowledge? Changed attitudes? Improved at work or school, with families, with colleagues?
Methods/Measures. Assessing outcome requires a valid, reliable, and feasible (simple and inexpensive) measurement (Johnson & Shaha, 1996). A useful categorization of outcome measures includes four dimensions (Lambert, Ogles, & Masters, 1992) that help determine the usefulness of outcome measures:
1. The content, which includes affect, behavior, cognition, interpersonal functioning, and social role. This dimension is structured to emphasize assessing intrapersonal change (emotions, behaviors, and thoughts), shifts in interpersonal relationships, and the individual's contribution to society.
2. The source of information. This includes self-reports, counselor ratings, trained observers, relevant others, or institutional sources.
3. Methods used to collect the data. Those categories include evaluation, description, observation, or status. Evaluation asks the rater to report a posttreatment assessment of the effectiveness of counseling and/or change in symptoms. Descriptive measures ask for reports on an attribute before and after the intervention; the descriptions then are compared to estimate the occurrence and degree of change. Observation methods involve ratings of behavior by trained observers, relevant others, or the counselor. Status methods report the state of client characteristics such as heart rate or marital status.
4. The time orientation of the instrument. This dimension includes the extent to which the assessment is focused on a stable, traitlike characteristic versus an unstable, statelike characteristic. Evaluating outcome instruments on these categories may provide a means to determine which measures are most helpful for specific situations.
Elliott (1992) revised the classification schema by including some features he deemed essential: a client focus, assessment subsequent to treatment, some standard of function, comparative analysis, and a particular point of view. His schema may also be helpful in choosing process and outcome assessment instruments.
Other client changes relate to function and adjustment. Related variables include maturity, self-adjustment, congruence, interpersonal competence, attitude change, locus of control, cognitive complexity, and a decrease in anxiety. Jacobson and Truax (1991) described a "reliable change index" that can be used to determine clinically significant change in a given client. The system they proposed is based on the movement of a client from dysfunction to a range of function. Their formula may be important in determining the effectiveness of treatment and in determining a threshold of acceptable progress.
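As a hedged illustration of the arithmetic involved (readers should consult Jacobson and Truax, 1991, for the authoritative formulation), the reliable change index is commonly presented as

\[
RC = \frac{x_{2} - x_{1}}{S_{\text{diff}}}, \qquad
S_{\text{diff}} = \sqrt{2\,S_{E}^{2}}, \qquad
S_{E} = s_{1}\sqrt{1 - r_{xx}},
\]

where x_1 and x_2 are a client's pretest and posttest scores, s_1 is the standard deviation of the pretest (or comparison) group, and r_xx is the reliability of the measure. Values of RC greater than 1.96 in absolute value suggest change that is unlikely to be accounted for by measurement error alone.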
Eisen and Dickey (1996) suggested additional means of assessing outcome. Profiling, report cards, and instrument panels are tools to be considered. Profiling compares the practice patterns of two groups with respect to client outcomes, accessibility to care, utilization of services, and satisfaction with care. Standards and guidelines based on expert opinion, published research, and statistical norms are the criteria used.
Appropriate domains for profiling techniques are functional status, quality of life, client well-being, and satisfaction with treatment. Instrument panels are used as a gauge on performance based on immediate feedback. Benchmarking is a reference point used to measure quality or worth. Benchmarking involves identifying a process, collecting data that describe performance of the process, determining the strongest performances, and adopting factors responsible for skillful performance (Campbell, 1994).
Client satisfaction measures assess the extent to which the consumer believes the services received have been convenient and useful. The Client Satisfaction Questionnaire-8 (CSQ-8) is a general measure that has eight items to measure general approval of services. The Service Satisfaction Scale-30 (SSS-30) has 30 items focusing on the counselor's manner and skill, the office procedures and access, and the perceived outcome. Both of these are brief and easy to score (Steenbarger & Smith, 1996).
Lambert and Hill (1994) and Sexton (1996) reported a number of commonly used inventories and methods of assessment of counseling outcome.
Efficiency
Definition. Efficiency evaluation focuses on allocation of resources. Ranier (1996) concluded that mental health care is affordable and cost-efficient in terms of dollar, workplace, physical wellness, and quality of life benefits. The evaluation of costs as related to benefits would help document those claims.
Components. This type of evaluation is designed to measure efficiency by relating costs to results. Scriven (1993) maintained that efficiency could not be determined without cost analysis. The relationship between cost and success is reported by the methods of cost-benefit analysis and cost-effectiveness analysis.
Cost-benefit analysis refers to the measure of both the costs and the benefits on a monetary basis. That common base allows comparisons. Cost-benefit analysis requires that the costs and benefits be known and transformed to a common measurement unit. The difficulty arises in placing a monetary value on some of the benefits of counseling services. For instance, the benefits noted above by Ranier (1996), workplace, physical wellness, and quality of life, all present difficult monetary conversions.
Yates (1998) explained that outcomes can be measured as cost offsets or as benefits produced. Cost offsets are monies that do not have to be spent because of what has been accomplished. An example would be a decrease in unemployment benefits for a person who had become employed as a result of a counseling intervention. Benefits produced are resources generated, such as net income.
Cost-effectiveness analysis compares the monetary cost to units of gains that are expressed in other terms (e.g., days at work, academic achievement). Some typical measures are improved functioning and quality-adjusted life years added (Yates, 1998). Yates suggested that ratios of costs divided by effectiveness of things such as cost per life saved, cost per drug abuser prevented, or cost per pound of weight lost are meaningful. Positive things made more likely and negative events made less likely are also measures of effectiveness.
Cost-benefit analysis and cost-effectiveness analysis can be used to compare the costs and the outcomes separately or combined to form ratios. Yates (1998) incorporated procedure and process into a formula he designed to improve the delivery of services as well as to learn more about the ingredients of the program. Andrieu (1977), Levin (1981, 1987), and Rossi and Freeman (1993) offer more detailed information about this highly technical procedure.
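To make the arithmetic concrete, the following minimal sketch (hypothetical figures and function names, not drawn from Yates, 1998, or the other sources cited) illustrates how a cost-effectiveness ratio and a simple cost-benefit comparison are formed:

def cost_effectiveness_ratio(total_cost, units_of_effect):
    # Cost per unit of effect, e.g., dollars per additional day at work.
    return total_cost / units_of_effect

def net_benefit(monetized_benefits, total_cost):
    # Cost-benefit comparison: benefits and costs in the same monetary unit.
    return monetized_benefits - total_cost

# Hypothetical example: a program costing $12,000 that yields 300 additional
# days at work and $18,000 in cost offsets (benefits that no longer must be paid).
print(cost_effectiveness_ratio(12000, 300))  # 40.0 dollars per day at work gained
print(net_benefit(18000, 12000))             # 6000 dollars of net benefit

The point of the sketch is only that cost-effectiveness analysis keeps the effect in its natural units, whereas cost-benefit analysis requires converting both sides to money.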
Questions. The questions that evaluating allocations may answer are
Did we use our resources wisely?
Was this intervention the best way to achieve the result?
What is the comparison of cost(s) to benefit(s)?
Methods/Measures. Yates (1998) explained the four sources of information for efficiency evaluation.
1. Costs are the resources used. That would include personnel, space, equipment, office supplies, communications, and financial services. He suggested that costs are the most commonly hidden criteria in evaluation.
2. Specific procedures of the treatment are the activities of the intervention, directly observable and measurable. Examples of the way procedures may be measured are by recording the intervention for the consumer or by reporting the number of sessions attended and the extent to which the client participated. This tracking involves careful record keeping.
3. Processes refer to what happens or changes inside the client, the psychological and biological outcomes such as self-statements, attributions, beliefs, and self-regulating strategies.
4. Outcomes are the short-term and long-term results of the treatment. Orlinsky, Grawe, and Parks (1994) explained outcome as the client's life becoming better in some way from someone's point of view.
Examples of such a change are an improvement in the social functioning of the client and a healthier physical and mental life.
Rossi and Freeman (1993) explained the three accounting perspectives in cost-benefit and cost-effectiveness analysis. Each perspective will yield different results.
1. The individuals receiving the treatment(s).
2. The monetary sponsors.
3. The community or society.
The accounting perspective chosen will depend on the audience for the evaluation.
Summary
By choosing from the information presented above, counselors can develop an evaluation system that addresses the criteria that have been identified as integral in the counseling enterprise. Gathering evidence related to those informed choices will yield serviceable information for accountability, knowledge, or development in a counseling practice.
CONCLUSION
A joint committee formed through the collaboration of the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education developed standards to guide the evaluation of educational and training programs. These Program Evaluation Standards (1994) provide criteria counselors can use to judge their evaluation plan.
The four attributes around which the standards were organized (utility, feasibility, propriety, and accuracy) serve as a checklist. The counselor can assess the planned evaluation system by checking whether the information gathered will be beneficial to the people who will use it (utility). The process to be conducted should be realistic, prudent, diplomatic, and frugal (feasibility). Propriety addresses the legal and ethical regard for those who are involved in and affected by the evaluation. Finally, communication of the results should be accurate (accuracy). Comparing the evaluation plan against these four standards will allow the counselor to ensure the integrity of the evaluation system.
Steenbarger, Smith, and Budman (1996) hypothesized that the current system of managed care is a transition from the decentralized practices of the past to an evolving system of learning organizations. That type of learning organization will devote efforts to documenting the quality and efficiency of its practice by incorporating data collection and utilization, thus "becoming increasingly businesslike and information-centered" (p. 250). The databases of accumulated evidence would allow increasingly complex understandings to guide the counseling practice and the care of the consumers (Steenbarger & Smith, 1996).
Hosie (1994) has recognized program evaluation as a potential area of expertise for counselors. Perhaps this expansion also can be served by the blueprint this matrix provides. Use of the matrix proposed herein and the choices related to the four foci could simplify the evaluation process. Counselors interested in becoming more purposeful about accumulating evidence about the context of their practice, as well as about the process, outcome, and efficiency of the counseling in which they are engaged, will be able to use this information to design a manageable evaluation system.
REFERENCES
Andrieu, M. (1977). Benefit cost evaluation. In L. Rutman (Ed.), Evaluation research methods: A basic guide (pp. 219-232). Newbury Park, CA.
Berwick, D. M. (1989). Continuous improvement as an ideal in health care. New England Journal of Medicine, 320, 53-56.
Beutler, L. E., Machado, P. P. P., & Neufeldt, S. A. (1994). Therapist variables. In A. E. Bergin and S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 229-269). New York: John Wiley.
Campbell, A. (1994). Benchmarking: A performance intervention tool. Journal of Quality Improvement, 20, 225-228.
Chelimsky, E. (1997). The coming transformations in evaluation. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century (pp. 1-26). Newbury Park, CA: Sage.
Chowanec, G. D. (1994). Continuous quality improvement: Conceptual foundations and application to mental health care. Hospital and Community Psychiatry, 45, 789-793.
Crosby, P. B. (1979). Quality is free. New York: McGraw-Hill.
Deming, W. E. (1986). Out of the crisis. Cambridge: Massachusetts Institute of Technology.
Dornelas, E. A., Correll, R. E., Lothstein, L., Wilber, C., & Goethe, J. W. (1996). Designing and implementing outcome evaluations: Some guidelines for practitioners. Psychotherapy, 33 (2), 237-245.
Drozd, J. F., & Goldfried, M. R. (1996). A critical evaluation of the state-of-the-art in psychotherapy outcome research. Psychotherapy, 33 (2), 171-180.
Eisen, S. V., & Dickey, B. (1996). Mental health outcome assessment: The new agenda. Psychotherapy, 33 (2), 181-189.
Elliott, R. (1992). A conceptual analysis of Lambert, Ogles, and Masters's conceptual scheme for outcome assessment. Journal of Counseling & Development, 70, 535-537.
Fairchild, T. N., & Seeley, T. J. (1996). Evaluation of school psychological services: A case illustration. Psychology in the Schools, 33, 46-55.
Feigenbaum, A. (1991). Total quality control. New York: McGraw-Hill.
Fetterman, D. M. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment & accountability (pp. 3-48). Newbury Park, CA: Sage.
Fetterman, D. M. (1997). Empowerment evaluation and accreditation in higher education. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century (pp. 381-395). Newbury Park, CA: Sage.
Frank, J. D. (1982). Therapeutic components shared by all psychotherapies. In J. H. Harvey & M. M. Parks (Eds.), The master lecture series: Vol. 1. Psychotherapy research and behavior change. Washington, DC: American Psychological Association.
Froehle, T. C., & Rominger III, R. L. (1993). Directions in consultation research: Bridging the gap between science and practice. Journal of Counseling & Development, 71, 693-699.
Goldfried, M. R. (1980). Toward the delineation of therapeutic change principles. American Psychologist, 35, 991-999.
Goldfried, M. R., & Padawer, W. (1982). Current state and future directions in psychotherapy. In M. R. Goldfried (Ed.), Converging themes in psychotherapy (pp. 3-49). New York: Springer.
Granello, P. F., & Granello, D. H. (1998). Training counseling students to use outcome research. Counselor Education & Supervision, 37(4), 224-237.
Greenberg, L., & Pinsof, W. (Eds.). (1986). The psychotherapeutic process: A research handbook. New York: Guilford.
Greene, J. C. (1994). Qualitative program evaluation: Practice and promise. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research. Newbury Park, CA: Sage.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.
Hadley, R. G., & Mitchell, L. K. (1995). Counseling research and program evaluation. Pacific Grove, CA: Brooks/Cole.
Hiebert, B. (1997). Integrating evaluation into counselling practice: Accountability and evaluation intertwined. Canadian Journal of Counselling, 31(2), 112-126.
Hill, C. E. (1991). Almost everything you ever wanted to know about how to do process research on counseling and psychotherapy but didn't know who to ask. In C. E. Watkins, Jr. & L. J. Schneider (Eds.), Research in Counseling. Hillsdale, NJ: Lawrence Erlbaum Associates.
Hill, C. E., & Corbett, M. M. (1993). A perspective on the history of process and outcome research in counseling psychology. Journal of Counseling Psychology, 40(1), 3-24.
Hill, C. E., Nutt, E. A., & Jackson, S. (1994). Trends in psychotherapy process research: Samples, measures, researchers, and classic publications. Journal of Counseling Psychology, 41(3), 364-377.
Hosie, T. W. (1994). Program evaluation: A potential area of expertise for counselors. Counselor Education & Supervision, 33, 349-355.
Hutchinson, N. L. (1997). Unbolting evaluation: Putting it into the workings and into the research agenda for counselling. Canadian Journal of Counselling, 31(2), 127-131.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting & Clinical Psychology, 59(1), 12-19.
Joint Commission on Accreditation of Healthcare Organizations (JCAHO). (1991). An introduction to quality improvement in health care. Oakbrook Terrace, IL: Author.
Johnson, L. D., & Shaha, S. (1996). Improving quality in psychotherapy. Psychotherapy, 33(2), 225-236.
Juran, J. (1993). Quality planning and analysis. New York: McGraw-Hill.
Lambert, M. J. (1989). The individual therapist's contribution to psychotherapy process and outcome. Clinical Psychology Review, 9, 469-485.
Lambert, M. J., & Bergin, A. E. (1994). The effectiveness of psychotherapy. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 143-189). New York: John Wiley.
Lambert, M. J., & Cattani-Thompson, K. (1996). Current findings regarding the effectiveness of counseling: Implications for practice. Journal of Counseling & Development, 74, 601-608.
Lambert, M. J., & Hill, C. E. (1994). Assessing psychotherapy outcomes and processes. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 72-113). New York: John Wiley & Sons.
Lambert, M. J., Masters, K. S., & Ogles, B. M. (1991). Outcome research in counseling. In C. E. Watkins, Jr. & L. J. Schneider (Eds.), Research in Counseling. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lambert, M. J., Ogles, B. M., & Masters, K. S. (1992). Choosing outcome assessment devices: An organizational and conceptual scheme. Journal of Counseling & Development, 70, 527-537.
Levin, H. M. (1981). Cost analysis. In N. L. Smith (Ed.), New techniques for evaluation. Newbury Park, CA: Sage.
Levin, H. M. (1987). Cost-benefit and cost-effectiveness analyses. In D. S. Cordray, H. S. Bloom, & R. J. Light (Eds.), Evaluation practice in review (pp. 83-99). San Francisco: Jossey-Bass.
Lewis, J. A., & Lewis, M. D. (1991). Management of human service programs (2nd ed.). Monterey, CA: Brooks/Cole.
Lyons, J. S., Howard, K. I., O'Mahoney, M. T., & Lish, J. D. (1997). The measurement and management of clinical outcomes in mental health. New York: John Wiley.
Marten, P. A., & Heimberg, R. G. (1995). Toward an integration of independent practice and clinical research. Professional Psychology: Research and Practice, 26, 48-53.
Mertens, D. M. (1998). Research methods in education and psychology. Newbury Park, CA: Sage.
Muraskin, L. D. (1993). Understanding evaluation: The way to better prevention programs. Washington, DC: U. S. Department of Education.
Nelson, M. L., & Neufeldt, S. A. (1996). Building on an empirical foundation: Strategies to enhance good practice. Journal of Counseling & Development, 74, 609-615.
Oliver, L. W., & Spokane, A. R. (1988). Career-intervention outcome: What contributes to client gain? Journal of Counseling Psychology, 35(4), 447-462.
Orlinsky, D. E. (1994). Research-based knowledge as the emergent foundation for clinical practice in psychotherapy. In P. R. Talley, H. H. Strupp, & S. F. Butler (Eds.), Psychotherapy research and practice (pp. 99-123). New York: Basic Books.
Orlinsky, D. E., Grawe, K., & Parks, B. K. (1994). Process and outcome in psychotherapy-noch einmal. In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapy and behavior change (4th ed., pp. 270-376). New York: John Wiley & Sons.
Peters, T., & Waterman, R. (1982). In search of excellence. New York: Harper & Row.
Plante, T. G., Couchman, C. E., & Diaz, A. R. (1995). Measuring treatment outcome and client satisfaction among children and families. Journal of Mental Health Administration, 22 (3), 261-269.
Prochaska, J. O. (1979). Systems of psychotherapy: A transtheoretical analysis. Homewood, IL: Dorsey.
Program Evaluation Standards. (1994). URL http://www.eval.org/EvaluationDocuments/progeval.html.
Rainer, J. P. (1996). The pragmatic relevance and methodological concerns of psychotherapy outcome research related to cost effectiveness and cost offset in the emerging health care environment. Psychotherapy, 33(2), 216-224.
Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach (5th ed.). Newbury Park, CA: Sage.
Russell, R. L. (1987). Language in psychotherapy: Strategies of discovery. New York: Plenum.
Schmoker, M. (1996). Results: The key to continuous school improvement. Alexandria, VA: Association for Supervision and Curriculum Development.
Scriven, M. (1993). Hard-won lessons in program evaluation. San Francisco: Jossey-Bass.
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday.
Sexton, T. L. (1996). The relevance of counseling outcome research: Current trends and practical implications. Journal of Counseling & Development, 74, 590-600.
Sexton, T. L., Schofield, T. L., & Whiston, S. C. (1997). Evidence-based practice: A pragmatic model to unify counseling. Counseling & Human Development, 30(4).
Sexton, T. L., & Whiston, S. C. (1991). A review of the empirical basis for counseling: Implications for practice and training. Counselor Education & Supervision, 30, 330-354.
Sexton, T. L., Whiston, S. C., Bleuer, J. C., & Walz, G. R. (1997). Integrating outcome research into counseling practice and training. Alexandria, VA: American Counseling Association.
Shipman, S., MacColl, G. S., Vaurio, E., & Chennareddy, V. (1995). Program evaluation: Improving the flow of information to the Congress. Washington, DC: U. S. General Accounting Office.
Sonnanburg, K. (1996). Meaningful measurement in psychotherapy. Psychotherapy, 33(2), 160-170.
Steenbarger, B. N., & Smith, H. B. (1996). Assessing the quality of counseling services: Developing accountable helping systems. Journal of Counseling & Development, 75, 145-150.
Steenbarger, B. N., Smith, H. B., & Budman, S. H. (1996). Integrating science and practice in outcomes assessment: A bolder model for a managed era. Psychotherapy, 33(2), 246-253.
Stufflebeam, D. L. (1983). The CIPP model for program evaluation. In G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models (pp. 117-142). Boston: Kluwer-Nijhoff.
Weiss, C. H. (1983). The stakeholder approach to evaluation: Origins and promise. In D. S. Cordray, H. S. Bloom, & R. J. Light (Eds.), Evaluation practice in review (pp. 23-43). San Francisco: Jossey-Bass.
Weiss, C. H. (1997). Theory-based evaluation: Past, present, and future. In D. J. Rog & D. Fournier (Eds.), Progress and future directions in evaluation: Perspectives on theory, practice, and methods (pp. 41-55). San Francisco: Jossey-Bass.
Whiston, S. C. (1996). Accountability through action research: Research methods for practitioners. Journal of Counseling & Development, 74, 616-623.
Wholey, J. S. (1994). Assessing the feasibility and likely usefulness of evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation. San Francisco: Jossey-Bass.
Winch, C. (1996). Quality and education. Oxford, England: Blackwell Publishers.
Yates, B. T. (1998). Formative evaluation of costs, cost effectiveness, and cost benefit: Toward cost-procedure-process-outcome analysis. In L. Bickman & D. J. Rog (Eds.), Handbook of applied social research methods. Newbury Park, CA: Sage.
Donna Henderson is an assistant professor in counselor education at Wake Forest University.
Copyright Love Publishing Company Dec 1998