Article Information

  • Title: Campus, online, or hybrid: an assessment of instruction modes.
  • Authors: Terry, Neil; Lewer, Joshua
  • Journal: Journal of Economics and Economic Education Research
  • Print ISSN: 1533-3604
  • Year: 2003
  • Issue: January
  • Language: English
  • Publisher: The DreamCatchers Group, LLC
  • Keywords: Macroeconomics; Online education; Students

Campus, online, or hybrid: an assessment of instruction modes.


Terry, Neil; Lewer, Joshua


ABSTRACT

This paper presents empirical results concerning the effectiveness of campus, online, and hybrid instruction in economics. The sample consists of graduate students enrolled in macroeconomic theory or international economics courses at a regional university. Assessment of enrollment, attrition, grade distribution, faculty evaluation, and course evaluation across the various instruction modes is presented. Holding constant ability, effort, and demographic considerations, students enrolled in the online course scored over six percent lower on the final exam than campus students and four percent lower than hybrid students. There is not a statistically significant difference between student performance on the final exam between campus and hybrid modes.

INTRODUCTION

There is little doubt that the online mode of instruction has become a major part of higher education and an important strategic issue for business schools. The U.S. Department of Education estimates that 100 new college courses are added to the online format each month (National Center for Education Statistics, 2001). In recent years, the efficacy of online instruction has been debated in the literature as the mode has become ubiquitous (Lezberg, 1998; Okula, 1999; Terry, 2000). One alternative to online instruction is the hybrid instruction mode. The hybrid mode combines some of the inherent features of the online (e.g., time independence) and campus (e.g., personal interaction) environments. The purpose of this paper is to compare student satisfaction and performance in the campus, online, and hybrid instruction modes. Standard assessment and regression techniques are employed. The research is based on graduate courses in macroeconomics and international economics at a regional university. The paper is organized as follows: First, an overview of concepts and definitions important to distinguishing the three instruction modes is provided. The next section presents assessment information relating to enrollment, attrition/drop rate, grade distribution, and student evaluation of faculty and course. Third, an empirical model testing the effectiveness of instruction mode while controlling for effort, ability, and demographic considerations is developed and employed. The final section offers conclusions and implications.

BACKGROUND

The fundamental characteristics of the campus, online, and hybrid instruction modes are not universally agreed upon. The authors acknowledge this lack of consensus but offer somewhat generic descriptions of each format in order to facilitate the research process. Campus-based or traditional instruction is probably the easiest to understand. The campus mode is characterized by student/faculty interaction via lectures, discussion, and exams on campus at scheduled times and days. There are approximately forty-five contact hours associated with a three credit hour course in most traditional campus offerings. The personal interaction between students and faculty associated with campus courses is often perceived as a characteristic that facilitates high quality learning. In addition, most professors were educated via traditional campus instruction and are familiar with the learning environment from the perspective of both student and instructor.

The online mode of instruction replaces the walls of the classroom with a network of computer communication. Some of the benefits of online instruction are its temporal, geographic, and platform independence and its simple, familiar, and consistent interface. Some of the drawbacks are: sophistication and creativity restricted by hardware and software compatibility; resistance to shifting to new and alternative teaching and learning paradigms; privacy, security, copyright, and related issues; and a lack of uniform quality (McCormack and Jones, 1998). Online instruction is heralded for providing flexibility for students in that it reduces the often-substantial transaction and opportunity costs associated with traditional campus offerings. This flexibility in structure is countered by potential problems including a lack of personal interaction (Fann and Lewis, 2001), the elimination of a sense of community (James and Voight, 2001), and the perception of lower quality (Terry, 2000). In addition, faculty often have reservations about preparing a new online course because of the large initial time investment involved, estimated at 400 to 1,000 hours per course (Terry, Owens and Macy, 2000).

Not all students can take campus courses and not all want online instruction. The general problem with campus courses for working professionals is the time constraint, while the most common complaint about online courses is the lack of personal interaction between students and professor that is often needed to facilitate the learning process, especially for advanced coursework. The hybrid mode is a potential solution that combines the positives from both modes. There are approximately eighteen to twenty-five contact hours associated with a three credit hour course. The decreased classroom contact time is offset by computer-based communication, which includes lecture notes, assignments, and e-mail correspondence. The hybrid mode offers busy graduate students and working professionals limited in-class time while maintaining an adequate amount of contact with faculty and peers. The obvious criticism of the hybrid format is the potential that the instruction mode combines not the best attributes of the campus and online formats but the worst. The potential negative attributes of hybrid instruction include a feeling that there is an inadequate amount of time to cover lecture topics, double preparation for the instructor because the mode requires both lecture and online materials, and a lack of time and geographic flexibility with respect to the campus lecture component.

Results from this study are derived from 327 graduate business students enrolled in economics courses in the years 1998-2002. The study cohort consists of 99 campus, 134 online, and 94 hybrid students from two graduate sections of macroeconomic theory and two sections of international economics in each instruction mode, a total of twelve courses. Every effort was made to keep the content and course requirements consistent across the three instruction modes in order to make multiple comparisons viable. Half of the grade in each course is determined by homework assignments and the other half by a proctored final exam. Twenty-five of the original 327 students dropped a course without taking the final exam, yielding a final research cohort of 302. Sixty-four percent of the students in the survey have full-time jobs. Fifty-five percent of the students have at least one child. Sixty-five percent of the sample population is male. Twenty percent of the students are foreign nationals. Eighty-two percent of the students in the survey live within a one-hour drive of campus.

ASSESSMENT RESULTS

Table 1 presents a multiple comparison of instruction modes across the common assessment criteria of enrollment, attrition/drop rate, grade distribution, student evaluation of faculty, and student evaluation of courses. The last three assessment variables are measured on a standard 4.0 scale, where 4.0 is the highest possible grade or score. Statistical differences in means are tested by employing a Kruskal-Wallis test for multiple comparison (Conover, 1980). The Kruskal-Wallis test is employed because it offers the most powerful test statistic in a completely randomized design without assuming a normal distribution. The results indicate that average enrollment for the online instruction mode is significantly greater than the campus or hybrid alternatives. Because students have the option of enrolling in the instruction mode of their choice, the enrollment numbers imply that the demand for the online mode is relatively high. Average enrollment for the online mode was over thirty-five percent higher than the alternative modes. The results imply that the convenience associated with online instruction is attractive to the study cohort.
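
For readers who wish to reproduce this style of comparison, the sketch below shows how a Kruskal-Wallis test can be run in Python with scipy.stats. The per-section enrollment figures are hypothetical placeholders, since only the group means from Table 1 are reported in the paper.

    # Kruskal-Wallis comparison of enrollment across the three instruction modes.
    # The per-section values below are hypothetical; the paper reports only the
    # group means (24.75 campus, 33.5 online, 23.5 hybrid).
    from scipy.stats import kruskal

    campus_sections = [22, 27, 24, 26]
    online_sections = [30, 36, 33, 35]
    hybrid_sections = [21, 25, 23, 25]

    h_stat, p_value = kruskal(campus_sections, online_sections, hybrid_sections)
    print(f"H = {h_stat:.2f}, p = {p_value:.4f}")  # reject equal distributions if p < .05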

Attrition/drop is defined in this study as the difference between the number of students officially enrolled in the course on the first class day versus the number officially enrolled on the last class day. The results indicate a clear difference in attrition/drop rates across the instruction modes. The campus attrition rate of 4.04 percent is significantly lower than the online and hybrid rates of 9.70 percent and 8.51 percent, respectively. One possible explanation of this result is that student/faculty personal interaction is an important component in student retention. The fluidity and independence associated with the online mode might also result in a relative ease of exit. It is interesting to note that attrition for the hybrid mode is lower than the online mode, although the difference is not statistically significant.

The third assessment variable in the study is class grade distribution. This broad measure of student performance indicates that the research cohort earned significantly lower grades when completing coursework in the online format. The grade distribution for the hybrid mode is approximately the same as the campus mode. In general, it appears that the online format is inferior in quality based on relative student performance, although a more rigorous methodology with control variables should be employed before any broad conclusions can be reached. The results are tempered by the observation that faculty might be more inclined to give students the benefit of the doubt with respect to grading as the level of personal interaction increases. It is also possible that students selecting the campus or hybrid modes are more concerned about faculty and peer contact as a means of ensuring quality control. Students that prioritize the perception of higher quality might simply be more serious and successful with respect to classroom performance. Hence, the results might be biased by higher quality students self-selecting the campus and hybrid modes. Another possible explanation is that students that enroll in campus or hybrid courses tend to have lifestyles without excessive time rigidities, which might lead to opportunities to study more and earn higher grades.

The last two assessment terms in Table 1 are student evaluations of faculty and course. The results indicate that student evaluations of faculty and course are significantly lower for the online format than the campus or hybrid alternatives. The implication is that students are not as satisfied with online instruction. An obvious reason for the result is the potential confounding effect caused by the lower grade distribution. The lack of direct personal interaction is another possible reason students evaluate the online professor and course relatively low.

MODEL AND RESULTS

The assessment results from the previous section provide a broad multiple comparison of the campus, online, and hybrid instruction modes. The purpose of this section is to compare the effectiveness of the instruction modes employing a more rigorous methodology. Davisson and Bonello (1976) propose an empirical research taxonomy in which they specify the categories of inputs for the production function of learning economics. These categories are human capital (admission exam score, GPA), utilization rate (study time), and technology (lectures, classroom demonstrations). Using this taxonomy, Becker (1983) demonstrates that a simple production function can be generated which may be reduced to an estimable equation. While his model is somewhat simplistic, it has the advantage of being both parsimonious and testable. There are a number of problems that may arise in this type of work (Chizmar & Spencer, 1980; Becker, 1983). Among these are errors in measurement and multicollinearity associated with demographic data. Despite these potential problems, there must be some starting point for empirical research into the process by which economics is learned if we are to assess various proposals as to how economics knowledge may best be imparted to our students.

Assume that the production function of learning for economics at the college level can be represented by a production function of the form:

(1) Y_i = f(A_i, E_i, D_i, X_i),

where Y_i measures the degree to which a student learns economics, A_i is information about the student's native ability, E_i is information about the student's effort, D_i is a [0, 1] dummy variable indicating the instruction mode, and X_i is a vector of demographic information. As noted above, this can be reduced to an estimable equation. The specific model used in this study is presented as follows:

(2) SCORE_i = B_0 + B_1 ABILITY_i + B_2 HW_i + B_3 NET_i + B_4 HYBRID_i + B_5 AGE_i + B_6 FOREIGN_i + u_i.

The dependent variable used in measuring effectiveness of student performance is the score (SCORE) on the comprehensive final exam, measured in percentage terms. The proxy for the student's native ability (ABILITY) is a composite of the GMAT score plus 200 times the upper-level (last 60 hours) undergraduate grade point average (GPA). For example, a student with a GMAT score of 600 and a 3.5 GPA would have a composite score of 1300. Many business colleges use the composite score as part of the admission process. The percentage score on the homework assignments (HW) measures student effort. The homework grade is used to measure effort since students are not constrained by time, research material, or the ability to ask the course instructor questions when completing the ten course assignments. Enrollment in an online or hybrid course is noted by the categorical variables NET and HYBRID, respectively, with the campus mode serving as the omitted base case.
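
As an illustration only, the composite ability proxy and an ordinary least squares estimation of equation (2) can be sketched in Python as follows. The column names mirror the paper's variables, but the ten sample rows are hypothetical stand-ins, since the student-level records are not published, and statsmodels is used simply as one example of an OLS routine.

    # Minimal sketch of the ABILITY composite and an OLS fit of equation (2).
    # All rows below are hypothetical illustrations, not the study's data.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.DataFrame({
        "SCORE":   [82, 74, 90, 68, 77, 85, 71, 88, 79, 66],            # final exam (percent)
        "GMAT":    [600, 540, 650, 500, 580, 620, 530, 640, 560, 510],
        "GPA":     [3.5, 3.1, 3.8, 2.9, 3.3, 3.6, 3.0, 3.7, 3.2, 2.8],  # last-60-hour GPA
        "HW":      [88, 76, 95, 70, 82, 90, 74, 93, 80, 68],            # homework average (percent)
        "NET":     [0, 1, 0, 1, 0, 0, 1, 0, 1, 1],                      # 1 = online section
        "HYBRID":  [0, 0, 1, 0, 1, 0, 0, 1, 0, 0],                      # 1 = hybrid section
        "AGE":     [29, 34, 26, 38, 31, 27, 35, 28, 33, 40],
        "FOREIGN": [0, 0, 1, 0, 0, 1, 0, 0, 1, 0],                      # 1 = international student
    })

    # ABILITY = GMAT + 200 * GPA, e.g. 600 + 200 * 3.5 = 1300
    df["ABILITY"] = df["GMAT"] + 200 * df["GPA"]

    X = sm.add_constant(df[["ABILITY", "HW", "NET", "HYBRID", "AGE", "FOREIGN"]])
    ols = sm.OLS(df["SCORE"], X).fit()
    print(ols.summary())   # coefficients correspond to B_0 through B_6 in equation (2)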

The choice as to what demographic variables to include in the model presents several difficulties. A parsimonious model is specified in order to avoid potential multicollinearity problems. The demographic variables in the model relate to student age (AGE) and nationality (FOREIGN). The age variable is included in the model based on anecdotal evidence that distance learners are more mature and self-motivated (Kearsley, 1998; Okula, 1999). The model corrects for international students because the majority of international students in the MBA program elected to enroll in the campus course instead of the Internet class. Specifically, only nine international students completed the Internet course while forty-nine completed a campus course. While other authors have found a significant relationship between race and gender and learning economics (Siegfried & Fels, 1979; Hirschfeld, Moore, & Brown, 1995), the terms were not significant in this study. A number of specifications were considered using race, gender, MBA emphasis, hours completed, and concurrent hours in various combinations. Inclusion of these variables in the model affected the standard errors of the coefficients but not the values of the remaining coefficients. For this reason they are not included in the model. University academic records are the source of admission and demographic information because of the potential biases identified in self-reported data (Maxwell & Lopus, 1994). There are a total of 327 students in the initial sample, with 25 students eliminated from the study for dropping a course (Douglas & Joseph, 1995).

Results from the ordinary least squares estimation of equation (2) are presented in Table 2. No pair of independent variables in the model has a correlation higher than .31, providing evidence that the model specification does not suffer from excessive multicollinearity. The equation (2) model explains 55 percent of the variance in final exam performance. Three of the six independent variables in the model are statistically significant. Of primary interest is the negative and significant coefficient associated with Internet instruction. Holding constant ability, effort, and demographic considerations, students enrolled in the Internet course scored over six percent lower on the comprehensive final exam than campus students. The empirical results provide evidence supporting the inferior quality criticism of Internet-based learning. The six-percent quality differential is not surprising since the mode is relatively new. It is reasonable to expect the quality gap between the campus and online instruction modes to narrow over time as faculty gain experience in the online environment and technological advances improve mode efficiency. Interestingly, the coefficient corresponding to the hybrid mode reveals that student scores on the final exam are two percent lower than the campus alternative, but the coefficient is not statistically significant. The student performance results corroborate the grade distribution assessment results of the previous section, as the campus and hybrid modes are shown to be approximately the same but significantly higher than the online instruction mode. Hence, the hybrid mode appears to supply quality that is equivalent to the campus mode with more time independence and flexibility.
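
The multicollinearity screen mentioned above can be reproduced by inspecting the pairwise correlations among the regressors; the short self-contained sketch below uses hypothetical values, since the paper only reports that no pairwise correlation exceeds .31.

    # Pairwise correlation screen for the equation (2) regressors.
    # The values are hypothetical stand-ins for the unpublished sample.
    import pandas as pd

    regressors = pd.DataFrame({
        "ABILITY": [1300, 1160, 1410, 1080, 1240, 1340],
        "HW":      [88, 76, 95, 70, 82, 90],
        "NET":     [0, 1, 0, 1, 0, 0],
        "HYBRID":  [0, 0, 1, 0, 1, 0],
        "AGE":     [29, 34, 26, 38, 31, 27],
        "FOREIGN": [0, 0, 1, 0, 0, 1],
    })

    print(regressors.corr().abs().round(2))   # flag any off-diagonal entry near or above .31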

The stability of the model's other coefficients implies that the model is somewhat robust. Ability as measured by the admission GMAT and GPA composite score has a positive and significant impact on final exam performance. Student effort as measured by percentage score on homework assignments yields a positive and significant coefficient. The effort variable does not accurately measure the amount of time a student applied to the course, since productivity differs across students and it is impossible to determine how long each student spends on a course homework assignment. The effort variable is more of a proxy for the willingness to work until complete and adequate homework answers are obtained, organized, and presented to the course instructor. Certainly, ability and effort should be positively related to final exam performance in a random sample of college courses. The two demographic variables in the model have positive coefficients but are not statistically significant. Hence, age and nationality do not have a significant impact on final exam performance for the research cohort in this study.

CONCLUSIONS AND IMPLICATIONS

This study compares the online, campus, and hybrid modes of instruction. The research results indicate that the pure form of online instruction is the least preferred. Specifically, student performance, faculty evaluations, and course evaluations were all significantly lower for the online mode of instruction compared to the campus and hybrid alternatives. The results should not be viewed as an indictment of online instruction since the format is still in the initial stage of development. It is almost certain that the gap in student satisfaction between online and campus courses will continue to narrow as new technology and faculty sophistication in the environment improve over time via the learning-by-doing process. For institutions and faculty not willing to fully commit to the online mode at this point, the hybrid mode is a viable alternative that offers some flexibility but maintains the highest levels of quality and student satisfaction. Retention is the only assessment area where the hybrid mode is significantly worse than the campus format. Overall, it appears that personal interaction and community are an important part of the education experience. The hybrid mode provides a transition between campus and online, maintaining some level of physical interaction. Holding constant factors such as innate ability and effort, graduate students completing coursework in the hybrid mode tested at a level equivalent to the campus mode and significantly higher than the online mode. The results of this study are of a preliminary nature. Further research is needed before any definitive conclusions can be drawn.

REFERENCES

Becker, W. (1983). Economic education research: New directions on theoretical model building. Journal of Economic Education, 14(2), 4-9.

Chizmar, J. & Spencer D. (1980). Testing the specifications of economics learning equations. Journal of Economic Education, 11(2), 45-49.

Conover, W. (1980). Practical Nonparametric Statistics. New York: John Wiley & Sons Publishing.

Davisson, W. & Bonello, F. (1976). Computer Assisted Instruction in Economics Education. University of Notre Dame Press.

Douglas, S. & Joseph, S. (1995). Estimating educational production functions with correction for drops. Journal of Economic Education, 26(2), 101-112.

Fann, N. & Lewis, S. (2001). Is online education the solution? Business Education Forum, 55(4), 46-48.

Hirschfeld, M., R. Moore & E. Brown. (1995). Exploring the gender gap on the GRE subject test in economics. Journal of Economic Education, 26(1), 3-15.

James, M. & Voight, M. (2001). Tips from the trenches: Delivering online courses effectively. Business Education Forum, 55(3), 56-60.

Kearsley, G. (1998). Distance education goes mainstream. Technological Horizons in Education, 25(10), 22-26.

Lezberg, A. (1998). Quality control in distance education: The role of regional accreditation. American Journal of Distance Education, 12(2), 26-35.

Maxwell, N. & Lopus, J. (1994). The Lake Wobegon effect in student self-reported data. American Economic Review, 84(2), 201-205.

McCormack, C. & Jones, D. (1998). Building a Web-based Education System. New York: Wiley Publishing.

National Center for Education Statistics (2001, January 4). Quick tables and figures: Postsecondary education quick information system, survey on distance education at postsecondary education institutions, 1997-1998. http://nces.ed.gov

Okula, S. (1999). Going the distance: A new avenue for learning. Business Education Forum, 53(3), 7-10.

Siegfried, J. & Fels, R. (1979). Research on teaching college economics: A survey. Journal of Economic Literature, 17(3), 923-969.

Terry, N. (2000). The effectiveness of virtual learning in economics. The Journal of Economics & Economic Education Research, 1(1), 92-98.

Terry, N., Owens, J. & Macy, A. (2000). Student and faculty assessment of the virtual MBA: A case study. Journal of Business Education, 1(2), 33-38.

Neil Terry, West Texas A&M University

Joshua Lewer, West Texas A&M University

Table 1: Multiple Comparison of Instruction Modes

                                         Campus     Online     Hybrid

Sample Size                                  99        134         94
Average Enrollment                        24.75    33.50 *      23.50
Attrition/Drop Rate (percent)            4.04 *       9.70       9.57
Class Grade Distribution (4.0 scale)       3.56     3.19 *       3.52
Faculty Evaluation (4.0 scale)             3.62     3.20 *       3.58
Course Evaluation (4.0 scale)              3.49     3.09 *       3.51

* Indicates statistically different than the other two instruction modes at p < .05

Table 2: Estimation of Equation (2)

Variable      Coefficient    t-statistic

Intercept        -43.4826        -2.04 *
ABILITY            0.0315         3.99 *
HW                 0.9466         4.16 *
NET               -6.1551        -4.34 *
HYBRID            -2.0131        -1.77
AGE                0.1045         0.87
FOREIGN            1.1212         0.55

Notes: R-square = .55, F = 26.68, * p < .05, and n = 302.