
Article Information

  • Title: A culture of accountability
  • Author: Welsh, John F
  • Journal: The Community College Enterprise
  • Print ISSN: 1541-0935
  • Year of publication: 2003
  • Issue: Spring 2003
  • Publisher: Schoolcraft College

A culture of accountability

Welsh, John F

The research study examines faculty and administrator perspectives on the importance of institutional effectiveness activities at both two-year and four-year colleges and universities, measuring the impact of four predictor variables. Data drawn from institutions accredited by the Southern Association of Colleges and Schools reveal little difference between faculty and administrator perspectives at two-year institutions, but significant differences between faculty and administrators at four-year institutions. The paper closes with a discussion of the implications of these findings for the cultivation of administrative and faculty support for institutional effectiveness activities at both two- and four-year institutions.

Introduction

The cultivation of faculty and administrative support for institutional effectiveness activities has become a major priority across the entire array of higher education institutions. Primarily driven by reporting requirements of the federal government, state higher education agencies, and regional and specialized accrediting agencies, colleges and universities in the United States are under considerable pressure to develop programs that collect and use information to document and improve institutional performance. Alexander (2000) calls the trend "the changing face of accountability" and observes that oversight agencies and stakeholders increasingly view data-driven improvement processes as the primary policy levers to ensure that colleges and universities respond appropriately to public expectations. The work of Peter Ewell (1989; 1991; 1994) and Trudy Banta (1993; 2002; Palomba & Banta, 1999) confirms the emergence of a "culture of accountability" in higher education.

Accreditation agencies have assumed a pivotal role for implementing the "new accountability" by adopting criteria and processes that require institutions to generate and use information for improvement. Institutional accrediting agencies, in particular, have worked to institutionalize and operationalize the new accountability in higher education through accreditation policies that require data-driven improvement processes, variously labeled "quality assurance," "quality improvement," or "institutional effectiveness."

For instance, in 1985 the Southern Association of Colleges and Schools (SACS) adopted the term "institutional effectiveness" to describe data-driven, quality-improvement, strategic management processes (SACS, 1998). SACS was one of the first institutional accrediting agencies to adopt criteria that require such activities, which typically include, but are not limited to, strategic planning, outcomes assessment, and program review. A review of the accreditation criteria of the six regional accrediting associations in the United States indicates that while the term "institutional effectiveness" differs from the terminology used by other institutional accrediting agencies, it differs neither in its broad intent nor in the expectations it conveys for institutional accountability.

As colleges implement institutional effectiveness programs, they confront many technical and organizational challenges in the collection, formatting, dissemination, and reporting of information about performance. An equally significant challenge they confront is the cultivation of support and the development of an institutional consensus on the importance of institutional effectiveness activities. While almost all of the literature on institutional effectiveness is anecdotal, polemical, or technical, Morse and Santiago (2000), Gray and Banta (1997), Nichols (1995), and Birnbaum (2000) all demonstrate that stakeholder consensus on the importance of institutional effectiveness is paramount to the successful implementation of institutional effectiveness activities. Very little research exists on how such activities have actually improved institutions, but there is mounting evidence about the barriers to both faculty and administrative support for them. Amey (1999), Ewell (1989), and Palomba and Banta (1999) argue that barriers to faculty and administrative support include lack of sustained attention by institutional leadership, systems poorly designed to use results, and lack of incentives to encourage faculty participation. Nichols (1995) suggests that these factors have a particularly acute effect on faculty, whose resistance tends to be the most important reason why institutional effectiveness activities fail. Benjamin (1994) and Ohmann (2000) suggest that faculty opposition to institutional effectiveness activities is primarily rooted in a concern about the potential loss of institutional autonomy and academic freedom.

Thus, higher education faces a perplexing set of circumstances: institutional effectiveness activities are becoming increasingly institutionalized practices in colleges and universities, but constituent support, particularly faculty support, for them is weak. Nevertheless, some evidence indicates that institutional characteristics may partially mitigate faculty and administrator responses to expectations for institutional effectiveness activities (Ewell, 1989; Moran & Volkwein, 1988). Ewell (1991; 1994), Buckner (1996), Kreider (1991), and Friedlander and MacDougall (1990) suggest that community colleges and other two-year institutions face noticeably different challenges from four-year institutions as they attempt to cultivate support for institutional effectiveness programs. Birnbaum (1989) and Cohen and March (1986) argue that different organizational patterns within institutions affect the ability of leadership to build support for institution-wide programs such as institutional effectiveness activities.

Welsh and Metcalf (2003), and Welsh and Metcalf (in press) demonstrate two salient points about predictors of faculty and administrative support for institutional effectiveness activities. First, while academic administrators evince greater support than faculty, the status of the respondent as faculty or administrator is not a significant predictor of support for institutional effectiveness activities when four related attitudinal variables are included in the analysis. This finding suggests that institutions must pay attention to cultivating both faculty and administrative support if they expect their effectiveness activities to succeed. Second, institutional type appears to affect faculty and administrative support for institutional effectiveness, but the differences have not been explored in any systematic way.

Research questions and the population

The purpose of the research is to explore the sources of faculty and administrative support for institutional effectiveness activities, measuring the impact of institutional type and four attitudinal variables. The research addresses three questions about faculty support for the implementation and development of institutional effectiveness activities in higher education. First, how do faculty and administrators at two-year institutions compare with their counterparts at four-year institutions in their attitudes toward institutional effectiveness activities? Second, within each type of institution, how do faculty and administrators compare in their attitudes toward institutional effectiveness activities? Third, if there are differences in these two comparisons, what factors help explain them?

Research methodology

Population and sample

While each of the six regional accrediting agencies in the United States has policies and specific accreditation criteria pertaining to institutional effectiveness, assessment, and program evaluation, the different terminology makes a national study of faculty and administrative support using a single instrument extremely difficult. Drawing a sample of respondents from institutions within a region controlled by one institutional accrediting agency permits the use of terminology on a questionnaire that is consistent, understandable, and familiar within the region.

Thus, the research questions for this study were asked through a mailed questionnaire distributed to faculty and academic administrators during Fall 2000 at the 168 institutions reviewed by evaluation teams of the Southern Association of Colleges and Schools (SACS) between September 1998 and May 2000 for either initial accreditation or reaffirmation of accreditation. The population for the study consisted of (1) full-time faculty who had served on institutional accreditation steering committees and (2) academic administrators at the dean's level or higher at degree-granting institutions that hosted SACS accreditation site teams between September 1998 and May 2000. Faculty were identified as those full-time employees whose primary duty is classroom teaching. Academic administrators were defined as employees who hold the position of president, academic vice-president or dean of instruction, or dean of an academic unit or division. Faculty respondents were identified through the SACS liaisons and self-study committee chairs at each institution. Academic administrators at participating institutions were identified through the 2000 Higher Education Directory. Through participation in self-study steering committees, faculty respondents were actively involved in the institutional effectiveness process, evaluating and documenting findings for outside evaluators. The rationale for selecting this population was the need to query respondents with basic, current knowledge of the SACS institutional effectiveness criteria as well as the institutional effectiveness processes at their own institutions. Selecting faculty and academic administrators from the SACS region helped ensure consistency in working knowledge of accreditation terminology and practices, since each institution in the population analyzed its compliance with the same criteria statements.

The population consisted of an actual respondent pool of 1,245, including 704 faculty members and 541 academic administrators. No sampling procedures were necessary since all potential respondents were included. The 386 responding faculty represented a 54.8% response rate and the 294 responding administrators represented a 54.3% response rate. A test for response bias indicated no significant relationship between respondent group (faculty/academic administrator) and response to the questionnaire, χ² = .029, n.s.
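For reference, this response-bias check can be reproduced from the counts reported above (386 of 704 faculty and 294 of 541 administrators responding), assuming the reported value is an uncorrected Pearson chi-square on the 2×2 group-by-response table; a minimal sketch in Python:

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = respondent group, columns = (responded, did not respond),
# with counts derived from the pools and response rates reported in the text
table = [
    [386, 704 - 386],  # faculty: 386 of 704 responded
    [294, 541 - 294],  # administrators: 294 of 541 responded
]

# correction=False gives the uncorrected Pearson chi-square,
# which matches the reported value of .029 (not significant)
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # chi2 = 0.029, p ≈ 0.86
```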

The number of cases greatly exceeded the minimal N of 364 derived from the statistical power requirements for multiple regression adopted by the researchers: (a) ability to detect an R squared of at least .20, (b) significance level of .01, and (c) statistical power of .90 (Cohen & Cohen, 1983). The data analyzed from 596 subjects included: (a) 112 faculty and 90 academic administrators from associate degree granting institutions, and (b) 191 faculty and 203 administrators from four-year institutions.
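As an illustration of the power calculation behind such a requirement, the sketch below computes the power of the overall R² test in multiple regression from the noncentral F distribution, using Cohen's effect size f² = R²/(1 − R²). It is a generic sketch, not a reconstruction of Cohen and Cohen's (1983) table-based procedure; in particular, the number of predictors (k = 5) is an assumption, so its output need not reproduce the minimal N of 364 reported above.

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n: int, k: int, r2: float, alpha: float) -> float:
    """Power of the overall F test of R^2 = 0 with k predictors and n cases."""
    f2 = r2 / (1 - r2)              # Cohen's f^2; R^2 = .20 gives f^2 = .25
    df1, df2 = k, n - k - 1         # numerator and denominator df
    nc = f2 * (df1 + df2 + 1)       # noncentrality parameter (Cohen's L)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return float(ncf.sf(f_crit, df1, df2, nc))  # P(reject | effect present)

# power at an assumed configuration using the study's criteria
print(regression_power(n=100, k=5, r2=0.20, alpha=0.01))
```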

Instrument development

Of the two independent variables in the study, the first is the status of the respondent, whether faculty or administrator. The second is institutional type, two-year or four-year, measured by whether the institution is SACS-approved to offer only associate degrees or to offer baccalaureate and possibly graduate degrees. The dependent variable, perceived importance of institutional effectiveness activities, was defined as the degree to which respondents report that institutional effectiveness activities are important to their institution. The survey instrument included five indices designed to yield information about the five attitudinal variables included in the research questions. The first index, Perceived Importance of Institutional Effectiveness Activities, is the operational definition of faculty and administrative support for institutional effectiveness activities and the dependent variable in the study.

A review of research literature and commentary on institutional support for assessment and other data-based quality improvement strategies suggests that faculty support is affected by four predictor variables:

Perceived Definition of Quality Index

Since the mid-1980s, accrediting agencies, state coordinating boards, and the higher education policy community have promoted an outcomes-based definition of quality (Chaffee & Sherr, 1992; Palomba & Banta, 1999). Institutional effectiveness increasingly refers to initiatives oriented toward measuring an institution's progress in fulfilling its mission, or its fulfillment of expectations for student learning as measured by outcomes.

Outcomes-based initiatives require extensive faculty participation and support. An outcomes-based approach cannot be pursued merely by documenting that the conditions necessary for quality instruction exist. Today, institutions must demonstrate the impact of instruction on student learning and show that faculty use assessment results to improve instruction. Faculty who support the outcomes-based conception of quality appear more likely to support institutional effectiveness activities (Abraham-Ramirez, 1997; Clarke, 1997; Schilling & Schilling, 1998).

Internal vs. External Motivation Index

There is a prevailing sense that institutional effectiveness has been forced on institutions by external entities, such as state governments, the federal government, and accrediting agencies. External accountability, not internal improvement, appears to be the primary motivation for the implementation of institutional effectiveness activities. If campus constituents, including faculty, believe that the activities are undertaken primarily to satisfy the standards of external groups, they will likely assign low levels of importance to them (Engelkemeyer, 1995). Seymour (1993) concurs but argues that internal motivators for institutional effectiveness activities are emerging and are likely to elevate internal institutional commitment to outcomes assessment and other data-based approaches to quality improvement. Internal motivators, such as quality improvement, may increase faculty support for institutional effectiveness activities.

Depth of Implementation Index

Perceptions of the importance of institutional effectiveness activities can be affected by the extent to which the activities have been integrated into the overall fabric of the institution (Birnbaum, 2000). Thomas (1997) found that faculty support for institutional effectiveness increases with the depth of implementation of these activities. Clarke (1997) found that faculty are more likely to support institutional effectiveness activities if they believe that the activities will actually be implemented at their institution. Institutions frustrate support for institutional effectiveness activities by implementing shallow processes aimed at satisfying external mandates rather than at institutional transformation. Faculty who perceive that the institution invests minimal effort to meet standards for institutional effectiveness may likewise attribute lower levels of importance to the standards.

Reported Level of Involvement Index

Institutional management processes may inhibit broad campus support for institutional effectiveness activities because they do not encourage personal involvement of campus participants. However, the involvement of campus participants in any innovative process appears crucial for receptivity to change and innovation (Burgher, 1998; Richardson, 1988). Level of involvement was identified as an important predictor of the degree to which institutional effectiveness programs succeeded in two-year institutions in the United States (McClure, 1996; Thomas, 1997). Not surprisingly, those who participate more intensely in institutional effectiveness activities are more likely to understand their role and express support for the process. Institutions that seek to cultivate support for institutional effectiveness activities should optimize faculty involvement in the process.

The indices, each comprising nine to eighteen Likert-scale questions, were developed specifically for this research, although Thomas (1997) conducted research using two indices with similar titles but dissimilar meanings: depth of implementation and internal vs. external motivation. The instrument also provided faculty respondents with an opportunity to comment on open-ended questions about institutional effectiveness activities, including suggestions for improving implementation. A panel of six postsecondary education professionals who specialize in institutional effectiveness and serve as SACS evaluators established the content validity of each index. The panel evaluated each item and also rated the adequacy of each index as a measure of its variable. After recommended changes were made, the panel rated each item and index as "good" or "excellent." The reliability of the instrument was established in the summer of 2000 through a pilot study of 48 faculty and administrators at SACS institutions, who were excluded from the final sample and analysis. The researchers used Cronbach's coefficient alpha to analyze data for each questionnaire item and index. With the exception of the Definition of Quality index, each coefficient alpha met or surpassed an r value of .70. The Definition of Quality index had a coefficient alpha of .52 but was judged adequate for inclusion in the final research instrument.
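Coefficient alpha for an index can be computed directly from the item responses; a minimal sketch, where the pilot data matrix is purely illustrative (real index items would be positively correlated, unlike the random values here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# illustrative: 48 pilot respondents answering a 9-item index on a 1-5 scale;
# independent random items share no common construct, so alpha will be near zero
rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(48, 9))
print(round(cronbach_alpha(pilot), 2))
```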

The indices attempted to capture the salient features of each variable and to create operational measures that, when combined, adequately addressed each concept. Items in each index were scored so that the higher the number assigned to a response, (1) the more positive the response toward institutional effectiveness, (2) the more the respondent perceived institutional improvement as the primary motive for institutional effectiveness, (3) the more the respondent believed an outcomes view of quality prevailed at the institution, (4) the deeper the perceived implementation, and (5) the greater the individual's involvement in institutional effectiveness. Several of the items were reverse coded to protect against bias from response sets.
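Reverse coding on a 1-to-5 Likert scale maps each response x to (1 + 5) − x before items are summed into an index score; a minimal sketch (the column names and values are hypothetical, not the study's actual items):

```python
import pandas as pd

# hypothetical responses to a three-item index on a 1-5 Likert scale,
# where item q3 is negatively worded and must be reverse coded
df = pd.DataFrame({"q1": [4, 5, 2], "q2": [3, 4, 1], "q3": [2, 1, 5]})

SCALE_MIN, SCALE_MAX = 1, 5
df["q3"] = (SCALE_MIN + SCALE_MAX) - df["q3"]  # 2 -> 4, 1 -> 5, 5 -> 1

# index score: sum of items, higher = more positive toward effectiveness
df["index"] = df[["q1", "q2", "q3"]].sum(axis=1)
print(df)
```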

Findings and analysis

Researchers used a combination of descriptive statistics, multivariate analysis of variance (MANOVA), and regression analysis to compare faculty and administrator support for institutional effectiveness activities at two-year and four-year institutions. They also determined the extent to which the four attitudinal variables and faculty/administrator status predict perceptions of the importance of institutional effectiveness activities at two- and four-year institutions. The data support four major observations.

First, Table 1 presents mean scores of faculty and administrators at both two- and four-year institutions for all five attitudinal indices. Administrators at both types of institutions demonstrate greater support, as measured by the Perceived Importance of Institutional Effectiveness index, than faculty. Table 1 also reveals that administrators at both types of institutions report higher scores on each of the four attitudinal variables than faculty. These differences are statistically significant.

Second, Table 1 also demonstrates that respondents from the two-year institutions, both faculty and administrators, demonstrate greater support for institutional effectiveness activities than their counterparts at the four-year colleges and universities. A multivariate analysis of variance, using institutional type (two- and four-year) and respondent category (faculty and administrator) as the independent variables, reveals that both independent variables have statistically significant effects on perceptions of the importance of institutional effectiveness activities and on the four attitudinal control variables.
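A two-factor multivariate analysis of this kind can be sketched with statsmodels; the variable and column names below are hypothetical, and the synthetic scores merely stand in for respondent-level survey data:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# synthetic respondent-level data standing in for the survey
rng = np.random.default_rng(1)
n = 596
df = pd.DataFrame({
    "inst_type": rng.choice(["two_year", "four_year"], n),
    "role": rng.choice(["faculty", "admin"], n),
})
for col in ["importance", "quality", "motivation", "depth", "involvement"]:
    df[col] = rng.normal(3.5, 0.6, n)  # placeholder index scores

# five index scores as dependent variables, two factors as predictors
mv = MANOVA.from_formula(
    "importance + quality + motivation + depth + involvement"
    " ~ inst_type + role",
    data=df,
)
print(mv.mv_test())  # Wilks' lambda and related statistics for each factor
```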

Third, Table 2 provides the results of a correlation analysis for respondents from two-year institutions. The correlation and regression analysis reveals that, for associate degree-granting institutions, the predictors of Perceived Importance of Institutional Effectiveness are similar for both faculty and administrators. All four of the attitudinal variables are significant predictors of the dependent variable for both types of respondents (R² = .738), with Internal Motivation (β = .308) the strongest single predictor.
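Standardized coefficients (betas) of this kind can be obtained by z-scoring all variables before fitting ordinary least squares; a minimal sketch with statsmodels, where the column names are hypothetical and the synthetic data stand in for the 202-case two-year subsample:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# synthetic stand-in for the two-year subsample (names are hypothetical)
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(3.5, 0.6, (202, 5)),
                  columns=["importance", "quality", "motivation",
                           "depth", "involvement"])

# z-score everything so the fitted coefficients are standardized betas
z = (df - df.mean()) / df.std(ddof=1)
X = sm.add_constant(z[["quality", "motivation", "depth", "involvement"]])
fit = sm.OLS(z["importance"], X).fit()
print(fit.rsquared)  # proportion of variance explained (R^2)
print(fit.params)    # standardized coefficients (betas)
```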

Fourth, survey results for faculty and administrators from the four-year institutions, however, differ. Table 3 demonstrates that, as at the two-year institutions, all four attitudinal variables have significant positive correlations with the dependent variable, the Perceived Importance index. The four-year institutions contributed a total of 394 subjects: 191 faculty members and 203 administrators. For faculty, the three statistically significant predictors of the Importance of Institutional Effectiveness were Internal versus External Motivation, Definition of Quality, and Level of Involvement in Institutional Effectiveness.

The analysis revealed, however, that the entry of the interaction variables into the regression equation produces a significant increase in the proportion of explained variance in the dependent variable (ΔR² = .017), indicating that the attitudinal predictors operate differently for faculty than for administrators.
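The significance of such an increment is conventionally tested with an F test on the change in R² between the restricted model and the full model containing the interaction terms; a minimal sketch, where all numeric inputs except ΔR² = .017 and n = 394 are illustrative assumptions:

```python
from scipy.stats import f as f_dist

def delta_r2_test(r2_full, r2_restricted, n, k_full, k_restricted):
    """F test for the R^2 increment when extra terms enter the equation."""
    m = k_full - k_restricted      # number of added (interaction) terms
    df2 = n - k_full - 1           # residual df of the full model
    F = ((r2_full - r2_restricted) / m) / ((1 - r2_full) / df2)
    return F, f_dist.sf(F, m, df2)

# illustrative values only; the R^2 of the full model and the predictor
# counts are assumptions, not figures reported in the study
F, p = delta_r2_test(r2_full=0.70, r2_restricted=0.70 - 0.017, n=394,
                     k_full=9, k_restricted=5)
print(f"F = {F:.2f}, p = {p:.4f}")
```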

The results demonstrate a clear difference between the two groups at the four-year institutions. For administrators, there are also three statistically significant variables, but they are not identical to those of faculty: Internal versus External Motivation (β = .280) and Depth of Implementation are among them.

[Tables 1, 2, and 3 appear in the original publication but are not reproduced here.]

Implications for practice

These results may be of interest to both administrators and faculty who are responsible for managing facets of institutional effectiveness activities at their colleges. The results may also have slightly different implications for higher education professionals at two- and four-year institutions. Senior administrators and those responsible for institutional effectiveness programs at two-year institutions may want to note that the four attitudinal control variables all contribute to support for these programs among both faculty and administrators. They may also want to consider that the structural features of their institutions may facilitate the cultivation of support for institutional effectiveness activities. Birnbaum (1989), for instance, suggests that senior administrators at two-year institutions may be better able to implement objectives-based management strategies, such as institutional effectiveness programs, than their counterparts at four-year institutions, because of more hierarchical organizational structures.

On the other hand, senior administrators and those responsible for institutional effectiveness programs at four-year institutions may want to note that there are some differences among the attitudinal factors that affect support for these activities. Faculty at four-year institutions, in contrast to administrators, place a premium on internal improvement and an outcomes view of educational quality. Thus, for faculty, it appears critical to the success of institutional effectiveness programs that the institution cultivate an environment which fosters internal improvement and outcomes-based educational quality. Administrators, in contrast to faculty, place a premium on depth of implementation, on "closing the loop" of institutional effectiveness activities. Administrators appear more concerned than faculty that the measurement of academic programs and institutional operations have an impact on the institution and its units. Thus, to cultivate administrative support for institutional effectiveness activities, four-year institutions should pay particular attention to the processes constructed to feed data, analyses, and recommendations back into institutional change processes. Faculty and administrators at four-year institutions should also bear in mind that, following Birnbaum (1989), the "political contests" and "organized anarchy" of comprehensive and research universities may complicate efforts to cultivate support for institutional effectiveness activities.

It remains to be seen whether colleges and universities will succeed at either cultivating broad support for institutional effectiveness activities or actually using the information to pursue dramatic, strategic change. The data from this sample provide some guidance, but it is important to bear in mind that the sampling process may have skewed the results. For instance, the faculty respondents in the study may have been favorably predisposed to institutional effectiveness activities by virtue of their participation in accreditation self-study committees; they may have provided altogether different responses from faculty who have not participated in such groups. However, it does not appear that the sources of pressure to develop and implement institutional effectiveness activities in higher education will dissipate any time soon. The cultivation of both faculty and administrative support for institutional effectiveness activities will remain a challenge for some time to come, but the data from this study suggest that two-year institutions are well positioned to meet it.

References

Abraham-Ramirez, H. D. (1997). Sources of influence on faculty members' receptivity to continuous quality improvement initiatives. Unpublished doctoral dissertation, The Pennsylvania State University, University Park, PA.

Alexander, F. K. (2000). The New Face of Accountability. The Journal of Higher Education 71(4): 411-431.

Amey, M. J. (1999). Faculty culture and college life: Reshaping incentives toward student outcomes. In J. Douglas Toma & Adrianna J. Kezar (Eds.) Reconceptualizing the collegiate ideal. San Francisco, CA: Jossey-Bass.

Banta, T. W. (1993). Making a Difference: Outcomes of a Decade of Assessment in Higher Education. San Francisco: Jossey-Bass.

Banta, T. W., and Associates. (2002). Building a Scholarship of Assessment. San Francisco, CA: Jossey-Bass.

Benjamin, E. (1994). From accreditation to regulation: The decline of academic autonomy in higher education. Academe 80 (4): 34-36.

Birnbaum, R. (1989). How colleges work. San Francisco, CA: Jossey-Bass.

Birnbaum, R. (2000) Management Fads in Higher Education. San Francisco: Jossey-Bass.

Bonvillian, G. & Dennis, T. L. (1995). Total quality management in higher education: Opportunities and obstacles. In Sims, Serbrenia J. & Sims, Ronald R. (Eds.), Total quality management in higher education: Is it working? Why or why not?, pp. 37-50. Westport, Connecticut: Praeger Publishers.

Buckner, C. S. (1996). Institutional climate and institutional effectiveness at three community colleges. Unpublished doctoral dissertation, East Tennessee State University, Johnson City, TN.

Burgher, R. L., (1998). The perceptions of support for innovation in mid-western liberal arts colleges. Unpublished doctoral dissertation, University of Iowa, Iowa City.

Chaffee, E. and Sherr, L. (1992). Quality: Transforming postsecondary education. ASHE-ERIC Higher Education Report Number 3. Washington, DC: The George Washington University.

Clarke, J. S. (1997). Personal and organizational structure correlates of receptivity and resistance to change and effectiveness in institutions of higher education. Unpublished doctoral dissertation, Louisiana State University and Agricultural and Mechanical College, Baton Rouge, LA.

Cohen, J. and Cohen, P. (1983). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Cohen, M. and March, J. (1986). Leadership in an Organized Anarchy. Cambridge, Massachusetts: Harvard Business School Publishing.

Engelkemeyer, S. W. (1995). Engaging faculty in continuous improvement and change initiatives. In Sims, Serbrenia J. & Sims, Ronald R. (Eds.), Total quality management in higher education: Is it working? Why or why not?, pp. 141-159. Westport, Connecticut: Praeger Publishers.

Ewell, P. T. (1989). Institutional Characteristics and Faculty/Administrator Perceptions of Outcomes: An Exploratory Analysis. Research in Higher Education 30(2): 13-36.

Ewell, P. T. (1990). Assessment and the "new accountability": A challenge for higher education's leadership. Denver, CO: Education Commission of the States.

Ewell, P. T. (1991, November). Effectiveness and student success in community colleges: Practices, realities, and imperatives. Paper presented at the Accountability and Assessment for the Community College System conference, Raleigh, NC.

Ewell, P. T. (1993). The role of states and accreditors in shaping assessment practice. In Trudy W. Banta (Ed.), Making a Difference: Outcomes of a Decade of Assessment in Higher Education (pp. 339-356). San Francisco: Jossey-Bass.

Ewell, P. T. (1994). A matter of integrity: Accountability and the future of self-regulation. Change 26 (6): 25-29.

Friedlander, J. & MacDougall, P. R. (1990). Responding to mandates for institutional effectiveness. New Directions for Community Colleges 72(4): 93-100.

Gray, P. J. & Banta, T. W. (eds.). (1997). The Campus-Level Impact of Assessment: Progress, Problems, and Possibilities. New Directions for Higher Education, no. 100. San Francisco: Jossey-Bass.

Kreider, P. E. (1991). Foreword. In D. Doucette & B. Hughes (Eds.), Assessing Institutional Effectiveness in Community Colleges. Laguna Hills, CA: The League for Innovation in Community Colleges.

McClure, T. R., Jr. (1996). A study of the impact of externally mandated institutional effectiveness and assessment activities on South Carolina technical colleges as perceived by technical college personnel in leadership roles. Unpublished doctoral dissertation, University of South Carolina.

McConnell, T. R. (1992). Accountability and autonomy. Journal of Higher Education 42: 445-451.

Moran, E. T. & Volkwein, J. F. (1988). Examining organizational climate in institutions of higher education. Research in Higher Education 28(4): 367-382.

Morse, J. A. & Santiago, G., Jr. (2000, January-February). Accreditation and faculty: Working together. Academe 86: 30-34.

Nichols, J. O. (1995). A Practitioner's Handbook for Institutional Effectiveness and Student Outcomes Assessment Implementation. New York: Agathon Press.

Ohmann, R. (2000, January-February). Historical reflections on accountability. Academe 86: 24-29.

Palomba, C. A. & Banta, T. W. (1999). Assessment Essentials: Planning, Implementing and Improving Assessment in Higher Education. San Francisco: Jossey-Bass.

Petry, L. C. (1957). Faculty-administrative relationships: Report of a work conference. Washington, D.C.: American Council on Education.

Richardson, R. C., Jr. (1988). Improving effectiveness through strategic planning. Community College Review 15(4): 28-34.

Ryan, J. G. (1993). After accreditation: How to institutionalize outcomes-based assessment. New Directions for Community Colleges 83 (3): 75-81.

Schilling, K. M. & Schilling, K. L. (1998). Proclaiming and sustaining excellence: Assessment as a faculty role. ASHE-ERIC Higher Education Report, 26(3) Washington, DC: The George Washington University.

Seymour, D. (1993). On Q: Causing Quality in Higher Education. New York: ACE/Macmillan.

Seymour, D. (1995). Once Upon a Campus: Lessons for Improving Quality and Productivity in Higher Education. Phoenix, AZ: ORYX Press.

Sherr, L. A., & Lozier, G. G. (1991). Total Quality Management in Higher Education. New Directions for Institutional Research, 18 (3): 3-11.

Southern Association of Colleges and Schools Commission on Colleges (1998). Criteria for Accreditation. Decatur, GA: Southern Association of Colleges and Schools Commission on Colleges.

Thomas, J. P. (1997). Innovation conditions and processes used in the adoption of institutional effectiveness in two-year colleges of the southern association of colleges and schools accreditation region. Unpublished doctoral dissertation, North Carolina State University, Raleigh.

Volkwein, J. F. & Malik, S. M. (1997). State regulation and administrative flexibility at public universities. Research in Higher Education 38 (1): 17-42.

Welsh, J. F. & Metcalf, J. (2003). Cultivating faculty support for institutional effectiveness activities: Benchmarking best practices. Assessment and Evaluation in Higher Education 28(1): 33-46.

Welsh, J. F. & Metcalf, J. (in press). Faculty and Administrator Support for Institutional Effectiveness Activities: A Bridge Across the Chasm? Journal of Higher Education.

John F. Welsh

Joseph Petrosko

Jeffrey Metcalf

Dr. Welsh is a Professor of Education in the Department of Leadership, Foundations and Human Resource Education at the University of Louisville in Louisville, Kentucky.

Dr. Petrosko is a Professor of Education in the Department of Leadership, Foundations and Human Resource Education at the University of Louisville in Louisville, Kentucky.

Dr. Metcalf is the Vice President for Planning and Assessment at Kentucky Christian College in Grayson, Kentucky.

Copyright Schoolcraft College Spring 2003
