Evaluating the strengths and weaknesses of group decision-making processes: A competing values approach
Bradley E. Wright and John Rohrbaugh

Ideally, meeting evaluations should enable a facilitator to diagnose a group's strengths and weaknesses and select appropriate interventions to help the group improve its effectiveness. John Rohrbaugh and Bradley Wright critique various approaches to the evaluation of group decision making and suggest that evaluations should focus on processes rather than outcomes, address the group rather than individual roles and behaviors, and view the group in organizational context rather than in isolation. Building on the Competing Values Approach (CVA) to organizational analysis, they describe four perspectives on group decision processes: empirical, rational, political, and consensual. They present a case in which a validated evaluation instrument, based on the CVA, was used to gain insight into the decision-making processes of an executive team.
Abstract by Sandor Schuman
While the reliance on team management by organizations has increased (Bennis & Biederman, 1997; Dyer, 1995; Schrage, 1995; Schwarz, 1994), so has the number of interventions that have been developed and promoted to improve group decision making with the intention of achieving ever greater group effectiveness and performance (Bostrom, Watson, & Kinney, 1992; Coleman & Khanna, 1995; Kleindorfer, Kunreuther, & Schoemaker, 1993; Morecroft & Sterman, 1994; Van Gundy, 1988). Of course, no single tool or technique will prove best under all circumstances. The intervention most suited to the needs of a particular group depends on a variety of situational factors such as group size and composition, task characteristics, available time, and organizational resources.
Facilitators (and managers) must be able to differentiate between and choose from a wide variety of group decision-making procedures, only a few of which may be suitable for a particular administrative committee, management task force, expert commission, or executive team. How do they assess the special needs of the group? Methods are required to evaluate the strengths and weaknesses of a group's usual or routine decision-making process to better inform the selection of an appropriate intervention. Those same methods might be used to assess whether the group intervention produces the intended results. Evaluation is critical for identifying the needs of the group, selecting an appropriate intervention, and assessing how well it worked to improve group decision-making.
This article offers an evaluation approach that can guide the selection of methods to improve group decision-making. First, a review of group evaluation methods suggests how many of them fail to provide a comprehensive conceptual framework. In light of the identified weaknesses, an alternative framework for evaluating and improving group decision-making effectiveness is presented. An application of this framework and a relevant diagnostic instrument are then illustrated, followed by a discussion of the potential implications for organization and group development.
Evaluating Group Decision Processes
The evaluation of group decision process effectiveness is not new, but, as a review of the literature illustrates, it has been characterized over the years by three fundamental weaknesses. First, decision processes typically have been assessed on the basis of subsequent outcomes rather than characteristics of the process itself (McGrath, 1984; Rohrbaugh, 1989). While the value of a group's decision can be measured by its results over time, it is almost impossible to ascertain whether a particular type of decision process actually led to that outcome or whether a different process would have led to a better (or worse) one. Without carefully controlled research designs that would allow the isolation of all of the variables that could affect outcomes, it would be foolish to assume that good outcomes necessarily follow from every good decision process or that bad outcomes only result from bad processes. Therefore, any assessment of the effectiveness of a group decision process requires directing primary attention to the process itself, not to subsequent outcomes.
Second, even when group research has been undertaken from a process approach, virtually all investigation appears to be descriptive rather than evaluative, with primary attention given to individual behavior rather than to collective performance (Zander, 1979). Such studies typically depend on some method of content coding the remarks of each participant (Bales & Cohen, 1979; Sillars, Coletti, & Rogers, 1982; Sims & Manz, 1984), using the encoded information to describe the effects of individual roles and behaviors on group processes. Rarely, however, have these data been used to draw conclusions about the performance of the whole membership as a single unit of analysis, that is, to assess the effectiveness of the group decision process.
Third, the evaluation of group decision-making processes often has treated group performance as if it were an end in itself. Janis and Mann (1977), for example, evaluated the effectiveness of group decision-making according to seven steps that they termed "vigilant information processing." This approach, based on a rational choice model, emphasized the need to identify all facts, obstacles, and alternatives. While this approach focused on process and treated the group as the unit of analysis, it did little to link the group's performance to the ongoing needs of the group or the larger organization within which most groups function. To be effective, a group must achieve not only its immediate objectives (decisions/actions) but also the objectives of its host organization. Groups must be viewed as social systems that serve standard system functions parallel to those served by the larger social systems of which they are a part.
Parsons (1959) suggested that all social systems must solve four basic problems: adaptation, goal achievement, integration, and pattern maintenance. These four problems reflect the immediate and future needs of a group as well as the internal and external ones. In order for a group to be effective, it must produce results valued by its members and its environment (goal achievement). To accomplish this, it must settle conflicts and direct the motivations of its members (integration), ensure continuity through education or expressive activities (pattern maintenance), and accommodate the demands of the environment (adaptation). Just as in organizations, the effectiveness of a group can be conceptualized on the basis of how well these problems are addressed. What is needed to improve the understanding of a group's decision process is a conceptual framework with multiple criteria by which to evaluate group decision process effectiveness. The Competing Values Approach provides such a framework that focuses on process characteristics (rather than outcomes), uses the group (rather than the individual members) as the unit of analysis, and takes into account the basic functions of the group as embedded in the larger organization.
The Competing Values Approach
The Competing Values Approach (CVA) to organizational analysis (Quinn, 1988; Quinn & Rohrbaugh, 1983; Lewin & Minton, 1986) was proposed originally to clarify the construct space occupied by "effectiveness," the ultimate dependent variable that lies at the very center of all organization theory (Cameron & Whetten, 1982). The earliest work in developing the CVA was a multidimensional scaling project that identified three axes undergirding judgments about the similarity of 16 commonly used criteria for assessing organizational performance (Campbell, 1977); the same three-dimensional space was found again as the result of a larger, replication study of organizational researchers and theorists (Quinn & Rohrbaugh, 1981, 1983). One dimension that separated criteria was interpreted as reflecting differing preferences for focus; some criteria had a more internal, person-oriented focus, while others were more externally, environmentally oriented. A second dimension was interpreted as reflecting differing preferences for structure; some criteria were concerned with flexibility and change, others with stability and control. A third dimension was interpreted as reflecting whether criteria were closer to organizational processes or outcomes, a means-ends continuum.
An important contribution of the CVA lies in the connection drawn between these three value dimensions of organizational analysis and Parsons' theory of functional prerequisites for any system of action (Parsons, 1959; Hare, 1976, 12-15). As shown in Figure 1, an orthogonal representation of the first two dimensions of competing values (i.e., focus, from internal to external, and structure, from flexibility to control) yields four distinct models of organizational analysis in quadrants that match Parsons' specification of functional prerequisites:
* the internal processes model (where the primary function is integration);
* the rational goals model (where the primary function is goal attainment);
* the open systems model (where the primary function is adaptation); and
* the human relations model (where the primary function is pattern maintenance and tension management).
The third value dimension, the means-ends continuum, is reflected in each model, since each model is concerned with both the process and outcome effectiveness of an organization.
Work on the CVA has proceeded beyond the organizational level of analysis. At the individual level, managerial performance appraisal has been conceptualized through the application of the CVA (Denison, Hooijberg, & Quinn, 1995; Quinn, 1984; Quinn et al., 1990). The fulfillment of eight specific managerial roles has been linked to successful performance of an organization in all four domains identified above: internal processes (coordinator and monitor roles), rational goals (director and producer roles), open systems (innovator and broker roles), and human relations (mentor and mediator roles). Recent work has also applied the CVA in evaluating managerial communication (Quinn, Hildebrandt, Rogers & Thompson, 1991; Rogers & Hildebrandt, 1993).
More recently, the application of the CVA to the performance literature at the group level of analysis (McCartt & Rohrbaugh, 1989, 1995; Reagan & Rohrbaugh, 1990) has led to the identification of four perspectives concerning the effectiveness of group decision processes:
* the empirical perspective (corresponding to the internal processes model);
* the rational perspective (corresponding to the rational goals model);
* the political perspective (corresponding to the open systems model); and
* the consensual perspective (corresponding to the human relations model).
The four perspectives reflect competing values because they emphasize what often appear as conflicting demands on any system of group decision support. Consistent with Parsons, the values most salient to the political perspective (instrumental, external concerns) differ strikingly from those most salient to the empirical perspective (consummatory, internal concerns); similarly, the values undergirding the consensual perspective (instrumental, internal concerns) are distinct from those undergirding the rational perspective (consummatory, external concerns). For this reason, individual evaluators of group decision process effectiveness may depend upon performance criteria that most reflect their own values.
The empirical perspective. Evaluators of collective decision processes who take an empirical perspective (primarily focused on internal, consummatory concerns) would stress the importance of documentation in a decision process. Particular attention should be directed to securing relevant information and developing large and reliable databases to provide decision support. Proponents of this perspective, typically trained in the physical and social sciences (especially management information systems), believe that to be effective a decision process should allow thorough use of evidence and full accountability.
The rational perspective. Evaluators of collective decision processes who take a rational perspective (primarily focused on external, consummatory concerns) emphasize clear thinking as the primary ingredient for effective decision-making. From this very task-oriented perspective (particularly common in management science and operations research), any decision process should be directed by explicit recognition of organizational goals and objectives. Methods that efficiently assist decision makers as planners by improving the consistency and coherency of their logic and reasoning would be highly valued.
The political perspective. Evaluators of collective decision processes who take a political perspective (primarily focused on external, instrumental concerns) encourage flexibility and creativity in approaches to problems. Idea generation ("brainstorming") would be judged on how attuned participants were to shifts in the problem environment and on how well the standing of the group was maintained or enhanced. The search for legitimacy of the decision (i.e., its acceptability to outside stakeholders who are not immediate participants but whose interests need to be represented) would be notable through a fully responsive, adaptable process.
The consensual perspective. Evaluators of collective decision processes who take a consensual perspective (primarily focused on internal, instrumental concerns) expect full participation in meetings allowing for open expression of individual feelings and sentiments. Extended discussion and debate about conflicting concerns should lead to collective agreement on a mutually satisfactory solution. As a result, the likelihood of support for the decision during implementation would be increased through such team building. This very interpersonally-oriented perspective is dominant in the field of organization development.
Associated with each of the four perspectives in the CVA are two criteria, one of which provides a standard for the nature of the process (i.e., empirical: data-based; rational: goal-centered; political: adaptable; consensual: participatory) and one of which assesses the ends or outcomes achieved (i.e., empirical: accountability; rational: efficiency; political: legitimacy; consensual: supportability). Altogether, eight criteria of group decision process effectiveness are identified; these criteria and the perspectives to which they pertain are juxtaposed as the CVA quadrants in Figure 2.
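To make this structure concrete, the pairing of each perspective with its process and outcome criteria can be represented as a small lookup table. The following Python sketch is purely illustrative; the dictionary and function names are our own conveniences, not part of the CVA instrument itself.

```python
# Sketch of the CVA structure: four perspectives, each with one process
# criterion and one outcome criterion (eight criteria in all), mirroring
# Figure 2. Names are illustrative placeholders.

CVA_PERSPECTIVES = {
    "empirical":  {"process": "data-based",    "outcome": "accountability"},
    "rational":   {"process": "goal-centered", "outcome": "efficiency"},
    "political":  {"process": "adaptable",     "outcome": "legitimacy"},
    "consensual": {"process": "participatory", "outcome": "supportability"},
}

def criteria_for(perspective: str) -> tuple[str, str]:
    """Return the (process criterion, outcome criterion) for a perspective."""
    entry = CVA_PERSPECTIVES[perspective]
    return entry["process"], entry["outcome"]

if __name__ == "__main__":
    for name in CVA_PERSPECTIVES:
        print(name, criteria_for(name))
```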
Group Decision-making Process: Diagnosing Strengths and Weaknesses
How would one characterize the strengths and weaknesses of decision making in an administrative committee, management task force, expert commission, or executive team? The answer requires a variety of criteria, questions, participants, and decisions.
A variety of criteria, not just one standard. The process that a group uses to make a decision should be assessed against multiple standards of effective performance. The CVA suggests eight distinct criteria for evaluating group decision making: adaptability, legitimacy, efficiency, goal centeredness, accountability, data based, participatory, and supportability. To use only one or two standards will ignore the many other ways in which groups develop special capabilities (and liabilities) in conducting their collaborative work. Any useful performance assessment should be mindful of all the alternative perspectives on what makes a group effective: consensual, political, empirical, and rational.
A variety of questions, not just one measure. The evaluation literature (see, for example, Rossi & Freeman, 1993) has documented the value of approaching any assessment with multiple forms of data gathering. Both the reliability and validity of a single question can be very low, since most complex constructs (such as "efficiency" or "legitimacy" of a group decision-making process) have multiple facets. Is a decision process fully participatory if everyone merely is issued an invitation to join in the meeting? What if no one attends the meeting, or if everyone attends but few people speak? In short, there are many aspects that influence the level of participation achieved in group decision making, or the level of efficiency, legitimacy, accountability, and all the other effectiveness dimensions as well. No single question can cover every one of these aspects.
A variety of participants, not just one person. Two or more witnesses can watch the same series of events yet produce sharply different accounts of the situation, even as relatively unbiased observers. It is not surprising that group participants (with distinct interests, responsibilities, objectives, and concerns) offer diverging descriptions of a shared decision-making process. Group leaders may defend (explicitly or implicitly) meetings that they organize, minimizing the frequency or importance of process-related complaints, while subordinates whose views do not prevail at the end of discussions may become especially disaffected. Although one may never establish the "truth" with complete objectivity, a thorough evaluation of group decision process effectiveness can note where participants agree in their observations-and where they do not.
A variety of decisions, not just one problem. Few groups follow exactly the same decision-making process for every problem or opportunity that they confront. Some decisions are given more time, some less. Some discussions are rich with available information, some poor. Some disagreements eventually give way to wide consensus, some to narrow votes. To assess process effectiveness, no single meeting should become the focus of evaluation. Only by inquiring about several group decisions is a coherent and interpretable performance pattern likely to emerge: participants may agree that their discussions almost always have been admirably goal centered or worrisomely inefficient, regardless of the problem involved. Then, too, large differences may emerge from meeting to meeting with respect to participants' assessments of decision legitimacy or accountability.
To provide a better understanding of how this approach works, the use of a variety of criteria, questions, participants, and decisions to evaluate group decision process effectiveness using the CVA is illustrated below in a brief case example. The case is also suggestive of, and followed by a discussion about, the potential implications that the CVA holds for organization development.
An Illustrative Case
Following participation in a workshop on systems thinking, a seven-member executive team agreed to monitor and review the strengths and weaknesses of their own group decision-making behavior. They identified three crucial decisions in which they had been involved over the preceding year. These three decisions had substantially influenced the nature of work in their organization: a) a major reallocation of office space, b) substantial investment in new technology to replace outmoded equipment, and c) the selection of new leadership for one of the largest divisions of the organization. Each member of the executive team independently completed a brief, 48-item questionnaire for each decision.
The questionnaire was designed specifically to produce eight distinct measures of group decision process effectiveness in a manner consistent with the CVA. A series of empirical studies over the past ten years has provided evidence supporting the validity of the measures contained in the questionnaire (McCartt & Rohrbaugh, 1989, 1995; Reagan & Rohrbaugh, 1990). Scale reliabilities have ranged from .60 to .80, while scale intercorrelations have indicated reasonable discriminant validity: on average, about 20 percent of measurement variance is shared between pairs of scales.
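As a rough indication of how such psychometric checks are commonly computed, the sketch below calculates Cronbach's alpha for one scale and the variance shared between two scale scores (the squared correlation). The data are invented for illustration; this is not the authors' analysis or the team's actual responses.

```python
# Illustrative psychometrics: Cronbach's alpha for one scale and the shared
# variance (squared correlation) between two scale scores. Data are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric answers for one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def shared_variance(scale_a: np.ndarray, scale_b: np.ndarray) -> float:
    """Proportion of variance shared by two scale scores (r squared)."""
    r = np.corrcoef(scale_a, scale_b)[0, 1]
    return r ** 2

# Hypothetical data: 7 respondents answering 5 items per scale on a 1-6 range.
rng = np.random.default_rng(seed=0)
participatory_items = rng.integers(3, 7, size=(7, 5)).astype(float)
goal_centered_items = rng.integers(2, 6, size=(7, 5)).astype(float)

alpha = cronbach_alpha(participatory_items)
overlap = shared_variance(participatory_items.mean(axis=1),
                          goal_centered_items.mean(axis=1))
print(f"alpha = {alpha:.2f}, shared variance = {overlap:.2f}")
```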
To illustrate the use of this instrument, the questions and complete results for two of the eight measures, participatory process and goal-centered process, are detailed in Figure 3. The responses of each member of the executive team to each of ten questions (five for participatory process and five for goal-centered process) are tabled with codes for the Likert-type response categories used in the questionnaire (strongly agree, SA; generally agree, GA; agree a little, A; disagree a little, D; generally disagree, GD; and strongly disagree, SD). Three of the ten questions shown in Figure 3 were worded in a negative rather than positive direction to break a single response set (i.e., any tendency to consistently agree, or consistently disagree, with all of the questions); these negatively worded questions were reverse coded.
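A minimal sketch of how such responses might be converted to numbers is shown below; the six-point coding (SA = 6 through SD = 1) and the reverse-coding rule are assumptions for illustration, since the article reports only the response labels.

```python
# Illustrative coding of Likert responses. The 6-to-1 numeric assignment and
# the reverse-coding rule for negatively worded items are assumptions.
RESPONSE_CODES = {"SA": 6, "GA": 5, "A": 4, "D": 3, "GD": 2, "SD": 1}

def code_response(label: str, negatively_worded: bool = False) -> int:
    """Convert a response label to a numeric score, reverse coding if needed."""
    score = RESPONSE_CODES[label]
    return 7 - score if negatively_worded else score

# Example: one member's answers to three questions, the last negatively worded.
answers = [("SA", False), ("GA", False), ("D", True)]
print([code_response(label, neg) for label, neg in answers])
# [6, 5, 4] -- "D" (3) becomes 4 after reverse coding
```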
The mean response to each question across the seven group members, as well as the mean response of each group member across the five questions, are provided in Figure 3 for each of the three decisions. Careful inspection of Figure 3 reveals that, on the whole, David and Charles responded more positively to these ten evaluation questions, while Linda and Roberto responded less positively. As a group, the executive team was more self-critical about the extent of goal centeredness in their decisions (for space, equipment, and personnel, the means were 3.9, 3.5, and 3.7, respectively) than the level of participation afforded (for space, equipment, and personnel, the means were 4.7, 4.9, and 5.1, respectively). The full set of measures for the three decisions made by this executive team is illustrated in Figure 4: the measures of the eight effectiveness criteria are plotted on the corresponding axes to form an overall profile for each decision. When perceptions of a decision process were more positive, the profile was extended outward on an axis. Concavities in a profile indicate aspects of decision process effectiveness that the executive team perceived to be weak.
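The aggregation just described can be expressed compactly. The sketch below uses invented scores for a single scale and decision, not the executive team's data; only the computations mirror the text: a mean per question across members, a mean per member across questions, and a scale mean that becomes one axis of the Figure 4 profile.

```python
# Illustrative aggregation of coded questionnaire scores for one scale of one
# decision. The numbers are invented; only the computations follow the text.
import numpy as np

# rows = 7 members, columns = 5 questions of a single scale (e.g., participatory)
scores = np.array([
    [6, 5, 5, 6, 5],
    [5, 5, 4, 5, 5],
    [6, 6, 5, 6, 6],
    [4, 4, 5, 4, 4],
    [5, 5, 5, 5, 5],
    [6, 5, 6, 6, 5],
    [4, 5, 4, 4, 5],
], dtype=float)

question_means = scores.mean(axis=0)  # mean per question across members
member_means = scores.mean(axis=1)    # mean per member across questions
scale_mean = scores.mean()            # one point on the decision's profile

print("per-question means:", question_means.round(2))
print("per-member means:  ", member_means.round(2))
print("scale mean:        ", round(scale_mean, 2))
```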
Figure 4 makes evident that members of the executive team positively regarded both the participatory process and the legitimacy of all three decisions (means of approximately 5.0 or higher). The evaluations of supportability, adaptability, and efficiency were less consistent across decisions. For example, the space reallocation process was viewed as the most adaptable, and its decision was rated considerably higher in supportability than the other two; the equipment decision was clearly the least efficient. On the whole, all four measures in the bottom two quadrants (from the empirical and rational perspectives) were indicative of process weaknesses (means of approximately 4.0 or lower). Diagnostic information such as that shown in Figure 4 can be highly useful feedback in fostering organization development and change, as discussed below.
Group Decision-making Process: Feedback and Change
The group decision process effectiveness questionnaire (McCartt & Rohrbaugh, 1995; Rohrbaugh, 1992; Reagan & Rohrbaugh, 1990) currently is the only diagnostic instrument validated for use with an administrative committee, management task force, expert commission, or executive team. It is unique in applying a comprehensive conceptual framework, the CVA, to provide feedback about the strengths and weaknesses of group deliberation with explicit indicators of members' satisfaction with eight distinct process criteria. This information, derived from each individual's responses to the questionnaire, can be summarized explicitly for each group member, as well as for the group as a whole (Figure 4). The clarification, interpretation, and discussion of such feedback has been found to provide a significant learning opportunity for decision-making groups.
Typically, discussion concerning the results of the group decision process effectiveness questionnaire has focused on three issues. First, differences between specific decision processes (in the illustrative case above, the space decision, the equipment decision, and the personnel decision) are examined. The group should develop some confidence that the results are valid, in that concavities in the depiction of their process effectiveness do, in fact, coincide with their judgments about weaknesses in previous deliberations. Second, differences in members' judgments are explored, so that the group can come to an understanding of individual dissatisfactions with certain aspects of the decision-making process, even though these perceptions may not be widely shared. Frequently, group members disagree in their views about their process effectiveness on one or more of the eight criteria; examining the basis for these disagreements is essential for group development to occur. Finally, the group begins to build some consensus that there are some clear strengths, as well as some apparent weaknesses, in the way in which they work together in making key decisions.
The larger and more detailed issues of appropriate diagnosis, feedback, and planning for organization development are certainly beyond the scope of this paper (see, for example, Burke, 1994; French & Bell, 1990). The involvement of a well-trained and experienced group facilitator can provide considerable assistance to a group as its members begin to think more explicitly about the ways in which they have approached critical decisions in the past and the ways in which they intend to approach critical decisions in the future. Groups will become ready to change their decision processes only to the extent that they are truly dissatisfied with certain aspects of their current approaches, truly anxious to improve their professional competencies by adopting new and better methods of group work, and truly confident that changing their ways of working together will advance rather than impair their collaboration. A skillful facilitator can offer considerable assistance in preparing a group for needed change.
Planning Change in Group Decision Processes: Using the CVA
All too frequently, groups react to difficulties in their decision processes by building on their strengths rather than by responding to their weaknesses. For example, if problems accrue from weaknesses in adaptability and legitimacy in decision-making (i.e., from a political perspective), an expert commission may resolve only to become even more data-based and accountable in its process. A management task force already highly committed to building participation in and support for its efforts (i.e., from a consensual perspective) may respond to criticism by doubling its efforts to more fully involve multiple stakeholders and constituencies in its meetings. Consistent with the CVA (Quinn & Rohrbaugh, 1983), however, the imperative for such groups is to change in ways for which they may be least prepared.
A group such as the executive team described in the illustrative case above appears to have adopted the process values of the consensual and political perspectives, succeeding reasonably well at participation, supportability, legitimacy, and, to a less consistent extent, adaptability (Figure 4). From the empirical and rational perspectives, however, considerable process improvement is warranted: increasing its use of information (data), accountability, goal centeredness, and efficiency. Is this executive team dissatisfied enough with its performance, anxious enough to improve its professional competencies, and confident enough in its potential to enhance its collaborative work that it will plan to change in the empirical and rational domains of group decision process? This executive team may find it difficult to make their decision processes more data-based, goal-centered, efficient, or accountable, since these may be areas of effectiveness (or ineffectiveness) that they do not value so highly, or for which they may not have the requisite knowledge or skills.
The areas of strength and weakness identified by the CVA can assist groups and their facilitators in leveraging change efforts by suggesting specific interventions that directly target those weaknesses. For instance, as shown in Figure 5, an administrative committee weak in the empirical perspective can select from a wide variety of tools designed to strengthen accountability and the use of data, such as better record keeping and decision support systems. Decision structures, fishbone diagrams, and evaluation matrices can be used to promote goal centeredness and efficiency, strengthening a group's performance from the rational perspective. Group weaknesses made explicit in each of the four quadrants can guide a facilitator's selection of specific methods that may strengthen the group's performance and, thereby, boost the group's overall effectiveness.
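As a sketch of how a facilitator might operationalize this matching, the example below flags any quadrant whose profile score falls below a chosen cut-off and lists candidate interventions for it. The empirical and rational entries come from this paragraph; the political and consensual entries, the 4.0 threshold, and all names are illustrative assumptions rather than prescriptions from Figure 5.

```python
# Illustrative matching of weak CVA quadrants to candidate interventions.
# Threshold, data structure, and the political/consensual entries are assumed.
INTERVENTIONS = {
    "empirical":  ["better record keeping", "decision support systems"],
    "rational":   ["decision structures", "fishbone diagrams", "evaluation matrices"],
    "political":  ["stakeholder analysis", "structured brainstorming"],
    "consensual": ["facilitated dialogue", "team-building exercises"],
}

def weak_quadrants(profile: dict[str, float], threshold: float = 4.0) -> list[str]:
    """Return perspectives whose mean criterion scores fall below the threshold."""
    return [p for p, score in profile.items() if score < threshold]

# Hypothetical profile: mean of each quadrant's two criteria on a 1-6 scale.
profile = {"empirical": 3.6, "rational": 3.8, "political": 5.0, "consensual": 5.2}
for quadrant in weak_quadrants(profile):
    print(quadrant, "->", INTERVENTIONS[quadrant])
```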
Conclusion
Evaluating the strengths and weaknesses of group decision-making processes is not only a critical task but also a difficult one. As a consequence, many approaches have been developed that provide some insight without offering the comprehensive framework necessary to guide efforts to improve group performance. The competing values approach, however, does offer such a framework. The CVA provides facilitators with a method for identifying a group's strengths and weaknesses that emphasizes the importance of group (not individual) processes (not outcomes) as they exist within a larger context. In doing so, the model informs efforts to differentiate among and choose from a wide variety of tools intended to enhance the effectiveness of group decision-making processes. Such a method should prove invaluable as the number of work groups and suggested intervention strategies continues to grow.
References
Bales, R. F. & Cohen, S. P. (1979). SYMLOG: A system for the multiple level observation of groups. New York: Free Press.
Bennis, W. G. & Biederman, P. W. (1997). Organizing genius: The secrets of creative collaboration. Reading, MA: Addison-Wesley.
Bostrom, R. P., Watson, R., & Kinney, S. T. (Eds.) (1992). Computer augmented teamwork: A guided tour. New York: Van Nostrand Reinhold.
Burke, W. W. (1994). Organization development: A process of learning and changing. Reading, MA: Addison-Wesley.
Cameron, K. S. & Whetten, D. A. (1982). Organizational effectiveness: A comparison of multiple models. New York: Academic Press.
Campbell, J. P. (1977). On the nature of organizational effectiveness. In P. S. Goodman & J. M. Pennings (Eds.), New perspectives on organizational effectiveness. San Francisco: Jossey-Bass.
Coleman, D. & Khanna, R. (Eds.) (1995). Groupware technologies and applications. Englewood Cliffs, NJ: Prentice Hall.
Denison, D. R., Hooijberg, R., & Quinn, R. E. (1995). Paradox and performance: Toward a theory of behavioral complexity in managerial leadership. Organization Science, 6, 524-540.
Dyer, W. G. (1995). Team building: Current issues and new alternatives. Reading, MA: Addison-Wesley.
French, W. L. & Bell, C. H., Jr. (1990). Organizational development. Englewood Cliffs, NJ: Prentice Hall.
Hare, A. P. (1976). Handbook of small group research. New York: Free Press.
Janis, I. L. & Mann, L. (1977). Decision making. New York: Free Press.
Kleindorfer, P. R., Kunreuther, H. C., & Schoemaker, P. J. H. (1993). Decision sciences: An integrative perspective. New York: Cambridge University Press.
Lewin, A. Y. & Minton, J. W. (1986). Determining organizational effectiveness: Another look and an agenda for research. Management Science, 32, 514-538.
McCartt, A. T. & Rohrbaugh, J. (1989). Evaluating group decision support system effectiveness: A performance study of decision conferencing. Decision Support Systems, 5, 243-253.
McCartt, A. T. & Rohrbaugh, J. (1995). Managerial openness to change and the introduction of GDSS: Explaining initial success and failure in decision conferencing. Organization Science, 6(5), 569-584.
McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice Hall.
Morecroft, J. D. W. & Sterman, J. D. (1994). Modeling for learning organizations. Portland, OR: Productivity Press.
Parsons, T. (1959). General theory in sociology. In R. Merton, L. Broom & L. S. Cottrell, Jr. (Eds.), Sociology today: Problems and prospects. New York: Basic Books.
Quinn, R. E. (1984). Applying the competing values approach to leadership: Toward an integrative framework. In J. G. Hunt, D. Hosking, C. Schriesheim & R. Stewart (Eds.), Leaders and managers: International perspectives on managerial behavior and leadership. New York: Pergamon Press.
Quinn, R. E. (1988). Beyond rational management. San Francisco: Jossey-Bass.
Quinn, R. E., Faerman, S. R., Thompson, M. P., & McGrath, M. R. (1990). Becoming a master manager: A competency framework. New York: John Wiley & Sons.
Quinn, R. E., Hildebrandt, H. W., Rogers, P. S., & Thompson, M. (1991). A competing values framework for analyzing presentational communication in management contexts. The Journal of Business Communication, 28, 213-232.
Quinn, R. E. & Rohrbaugh, J. (1981). A competing values approach to organizational effectiveness. Public Productivity Review, 5, 122-140.
Quinn, R. E. & Rohrbaugh, J. (1983). A spatial model of effectiveness criteria: Towards a competing values approach to organizational analysis. Management Science, 29, 363-377.
Quinn, R. E., Rohrbaugh, J., & McGrath, J. E. (1985). Automated decision conferencing: How it works. Personnel, 62, 49-55.
Reagan, P. & Rohrbaugh, J. (1990). Group decision process effectiveness: A competing values approach. Group & Organization Studies, 15, 20-43.
Rogers, P. S. & Hildebrandt, H. W. (1993). Competing values instruments for analyzing written and spoken management messages. Human Resource Management, 32, 121-142.
Rohrbaugh, J. (1989). Demonstration experiments in field settings. In I. Benbasat (Ed.), The information systems research challenge: Experimental research methods, Vol. 2. Boston: Harvard Business School.
Rohrbaugh, J. (1992). Cognitive challenges and collective accomplishments: The University at Albany. In R. P. Bostrom, R. Watson & S. T. Kinney (Eds.), Computer augmented teamwork: A guided tour. New York: Van Nostrand Reinhold.
Rossi, P. H. & Freeman, H. E. (1993). Evaluation: A systematic approach. Newbury Park, CA: Sage.
Schrage, M. (1995). No more teams: Mastering the dynamics of creative collaboration. New York: Currency Doubleday.
Schwarz, R. M. (1994). The skilled facilitator: Practical wisdom for developing effective groups. San Francisco: Jossey-Bass.
Sillars, A., Coletti, S. F., & Rogers, M. A. (1982). Coding verbal conflict tactics. Human Communication Research, 9, 73-95.
Sims, H. P., & Manz, C. M. (1984). Observing leader verbal behavior: Toward reciprocal determinism in leadership theory. Journal of Applied Psychology, 69, 222-232.
VanGundy, A. B. (1988). Techniques of structured problem solving (2nd ed.). New York: Van Nostrand Reinhold.
Zander, A. (1979). The study of group behavior during four decades. Journal of Applied Behavioral Science, 15, 272-282.
Bradley E. Wright
University at Albany, State University of New York, bw3812@cnsvax.albany.edu
John Rohrbaugh
Department of Public Administration and Policy, University at Albany, State University of New York
Copyright International Association of Facilitators Winter 1999