Do online students make the grade on the business major field ETS exam?
Terry, Neil; Mills, LaVelle; Rosa, Duane; Sollosy, Marc
INTRODUCTION
Assessment is an explicit obligation of modern academic programs.
The Educational Testing Service's (ETS) exam in business is an
external standardized instrument widely used to assess
undergraduate business programs. Standardized exams like the ETS
business exam offer a convenient tool for benchmarking student general
knowledge compared to students at other schools. Evidence supporting the
correlation between ETS scores and a student's actual business
knowledge is limited, but the exam is nonetheless widely employed as an
assessment tool. The
purpose of this paper is to evaluate the determinants of student
performance on the ETS major field achievement exam with a focus on
students taking multiple business courses in the online environment.
There is little or no literature examining the impact of online courses
on student performance on the ETS exam, even though online and
hybrid instruction have become ubiquitous on many college
campuses during the last decade. The results of this study are derived
from data collected at a mid-sized public university in the
southwestern United States with a total enrollment of approximately
7,500 students, including 1,000 undergraduate business students and 350
graduate business students.
The organization of the manuscript is as follows: First, a brief
literature review is put forth. The second section of the manuscript
describes the data and model. The next section offers empirical results
for the determinants of performance on the ETS exam. The final section
offers conclusions and implications.
LITERATURE REVIEW
A vast amount of research exists on the determinants of student
performance on the ETS exam. Mirchandani, Lynch, and Hamilton (2001)
find that two types of variables are related to student performance on
the ETS exam: input variables (SAT scores, transfer GPA, and gender) and
process variables (grades in quantitative courses). They conclude that
the SAT score is a dominant variable explaining most of the variation in
ETS exam scores, although other variables including GPA and gender are
also statistically significant. Black and Duhon (2003) employ a sample
of 297 students to examine the determinants of student performance on
the ETS exam.
Their regression model reveals that GPA, ACT score, gender, and major
are significant determinants of performance on the ETS exam. Bagamery,
Lasik, and Nixon (2005) find gender, whether students took the SAT, and
grades to be significant determinants of the ETS exam, while location,
age, transfer status, and major are not significant. Bycio and Allen
(2007) contribute to the literature by showing that, in addition to GPA
and SAT scores, student motivation is an important determinant of
performance on the ETS exam. Terry, Mills, and Sollosy (2008) find that
student motivation to perform at a high-level on the ETS exam is
significantly influenced by including the exam score as ten or twenty
percent of the final grade in a business capstone course.
Course formats in business schools today are driven by both student
demand and the desire of schools to use resources in efficient ways.
Attracting students from broader areas is one of the reasons for the
expansion of online course delivery. The nature of course format could
impact ETS scores if one instruction mode is inherently inferior to
another. Three frequently used course formats are traditional
campus courses, online courses, and newer hybrid courses. Hybrid courses
are taught using a mode of instruction that combines some of the
inherent features of online (e.g., time independence) and campus (e.g.,
personal interaction) environments (Terry, 2007).
Online course offerings in postsecondary schools are growing
rapidly. Institutions offering online courses include both traditional
universities and institutions founded to offer only online courses,
such as Capella University, which was founded in 1993 and currently
has over 19,900 adult learners enrolled in online courses. According to
the U.S. Department of Education, 90 percent of degree-granting
postsecondary institutions offered asynchronous Internet courses in 2001
(National Center for Education Statistics, 2001). Both the numbers of
postsecondary schools offering online courses and the numbers of
students enrolling in online courses are increasing. Jeff Seaman (2007),
chief information officer and survey director of the Sloan Consortium,
states, "There were nearly 3.2 million students taking at least one
course online this past fall, up from 2.3 million just last year."
According to Online Nation: Five Years of Growth in Online Learning
(Allen & Seaman, 2007), the growth rate of 9.7 percent for online
enrollments far exceeds the growth rate of 1.5 percent for the overall
higher education student population. Brown
and Corkill (2007) indicate that almost two-thirds of colleges and
universities that offer face-to-face courses also are providing graduate
courses via the online environment.
As the numbers of students enrolled in online instruction have
increased, researchers have debated the effectiveness of online
instruction (Bowman, 2003; Fann & Lewis, 2001; Fortune, Shifflett
& Sibley, 2006; Gaytan & McEwen, 2007; Jennings & Bayless,
2003; Lezberg, 1998; Marks, Sibley & Arbaugh, 2005; Okula, 1999;
Robles & Braathen, 2002; Terry, 2000; Worley & Dyrud, 2003).
Interest in the effectiveness of online instruction as a component of
overall program effectiveness has been driven by the federal government
through the requirements of regional accrediting agencies, by
international accreditation associations for schools of business, by
the universities in which business schools are housed, and by various
individual stakeholders. As
individual college CEOs examine the role of online learning in meeting a
college's strategic needs, assurance of its effectiveness in the
creation of genuine learning is a critical factor to be considered
(Ebersole, 2008). While the need for assessment is not new, the focus of
assessment as illustrated by the Association to Advance Collegiate
Schools of Business (AACSB) International has clearly intensified
(Pringle & Michel, 2007).
All collegiate business programs are tasked with the ongoing need
for assessment (Bagamery, Lasik & Nixon, 2005; Martell &
Calderon, 2005; Trapnell, 2005). It is important that assessment for
online education be viewed as a system that involves more than just
testing and evaluation of students (Robles & Braathen, 2002).
Traditionally, accrediting bodies focused primarily on input
measures (Peach, Mukherjee & Hornyak, 2007). Input measures could
reflect characteristics of the students who attended the business
program (Mirchandani, Lynch & Hamilton, 2001) or organizational
factors such as the institution's reputation, faculty-student
ratio, or number of faculty with terminal degrees (Peach, Mukherjee
& Hornyak, 2007). Collegiate business programs aspiring to meet
or maintain the accreditation standards established by AACSB must have
program learning goals and utilize direct measures that reflect student
demonstration of achievement of these goals (Martell, 2007; Pringle
& Michel, 2007). As schools of
business have developed and rapidly expanded their online course
enrollments, assuring that student learning in the online format is at
least equivalent to the level of learning taking place in traditional
classroom courses could be a useful component of meeting assessment
requirements.
DATA AND MODEL
The purpose of this section is to develop an empirical model of the
determinants of student performance on the ETS exam. Davisson and Bonello
(1976) propose an empirical research taxonomy in which they specify the
categories of inputs for the production function of learning. These
categories are human capital (admission exam score, GPA, discipline
major), utilization rate (study time), and technology (lectures,
classroom demonstrations). Using this taxonomy, Becker (1983)
demonstrates that a simple production function can be generated which
may be reduced to an estimable equation. While his model is somewhat
simplistic, it has the advantage of being both parsimonious and
testable. A number of problems may arise from this research approach
(Chizmar & Spencer, 1980; Becker, 1983). Among these are errors in
measurement and multicollinearity associated with demographic data.
Despite these potential problems, there must be some starting point for
empirical research into the process by which business knowledge is
learned.
The choice of which demographic variables to include in the model
presents several difficulties. A parsimonious model is specified in
order to avoid potential multicollinearity problems. While other authors
have found a significant relationship between race or age and learning
(Siegfried & Fels, 1979; Hirschfeld, Moore, & Brown, 1995), the
terms are not significant in this study. A number of specifications are
considered using race, age, work experience, and concurrent hours in
various combinations. Inclusion of these variables in the model
affected the standard errors of the coefficients but not the value of
the remaining coefficients. For this reason they are not included in the
model. University academic records are the source of admission and
demographic information because of the potential biases identified in
self-reported data (Maxwell & Lopus, 1994).
The model developed to analyze student learning relies on a
production view of learning. Assume that the learning of business
concepts, as measured by the ETS exam, can be represented by a
production function of the form:
(1) $Y_i = f(A_i, E_i, D_i, X_i)$,
where Y measures the degree to which a student learns, A is
information about the student's native ability, E is information
about the student's effort, D is a 0-1 dummy variable indicating
instruction mode, and X is a vector of demographic
information. As noted above, this can be reduced to an estimable
equation. The specific model used in this study is presented as follows:
(2) $\mathit{SCORE}_i = \beta_0 + \beta_1 \mathit{ABILITY}_i + \beta_2 \mathit{GPA}_i + \beta_3 \mathit{NET}_i + \beta_4 \mathit{TRANSFER}_i + \beta_5 \mathit{FOREIGN}_i + \beta_6 \mathit{GENDER}_i + \beta_7 \mathit{GR10}_i + \beta_8 \mathit{GR20}_i + u_i$.
The dependent variable used to measure student
performance is percentile score (SCORE) on the ETS exam. Descriptive
statistics of all variables employed in the model are presented in Table
1. The ETS exam is administered to senior business students in the
research cohort enrolled in the undergraduate capstone strategic
management course. The mean percentile score for the research cohort is
the 48.49th percentile with a standard deviation of 28.07. A mean of
approximately the 50th percentile, combined with a large standard
deviation spanning both very good and relatively poor student
performances, yields a research cohort that is representative of a
typical regional business program.
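As a small illustration, Table 1-style summary statistics can be computed directly from cohort records; a minimal sketch follows, assuming a hypothetical data file and column labels rather than the study's actual records.

```python
# Minimal sketch: mean and standard deviation for each model variable.
# "ets_cohort.csv" and the column names are hypothetical stand-ins
# for the study's student records.
import pandas as pd

df = pd.read_csv("ets_cohort.csv")
cols = ["SCORE", "ABILITY", "GPA", "NET", "TRANSFER",
        "FOREIGN", "GENDER", "GR10", "GR20"]
print(df[cols].agg(["mean", "std"]).T.round(2))
```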
The student's academic ability (ABILITY) is based on the ACT
entrance exam or SAT converted to ACT equivalency. The average ACT score
for the research cohort is 21.04 (equivalent to 1020 on the math/reading
SAT or 1550 on the 2400-point SAT). The ABILITY variable is used as a
proxy for student innate ability before entering the
university. Student ability as measured by the ACT exam is expected to
have a positive impact on ETS score.
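The text does not specify the concordance used to convert SAT scores to ACT equivalents; the sketch below shows one plausible approach via linear interpolation over anchor points. Only the ACT 21 / SAT 1020 anchor comes from the text; the remaining anchors are hypothetical placeholders for a published concordance table.

```python
# Hedged sketch of SAT-to-ACT conversion by linear interpolation.
# Only the 1020 -> 21 anchor is stated in the text; the other anchor
# points are hypothetical and should be replaced with values from a
# published ACT/SAT concordance.
SAT_TO_ACT = {800: 15, 900: 18, 1020: 21, 1200: 25, 1400: 31}

def act_equivalent(sat: int) -> float:
    """Interpolate an ACT-equivalent score from a math/reading SAT score."""
    pts = sorted(SAT_TO_ACT.items())
    if sat <= pts[0][0]:
        return float(pts[0][1])
    if sat >= pts[-1][0]:
        return float(pts[-1][1])
    for (s0, a0), (s1, a1) in zip(pts, pts[1:]):
        if s0 <= sat <= s1:
            return a0 + (a1 - a0) * (sat - s0) / (s1 - s0)

print(act_equivalent(1020))  # -> 21.0, the equivalence stated in the text
```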
Grade point average (GPA) is included in the model based on previous
research indicating that grade point average is one of the primary
positive determinants of student performance on the ETS exam. The mean
grade point average for the research cohort is 2.97, with a standard
deviation of approximately half a letter grade at 0.49.
Student enrollment in more than one online business course before
taking the ETS exam is captured by the categorical variable NET. The
business program in the research study
does not offer a complete undergraduate business degree online but does
offer ad hoc courses via the online instruction mode. Forty-three
percent of the students in the research cohort completed multiple
business courses via online instruction. The NET variable is expected to
have a negative impact on ETS scores given that the online environment is
still developing as an instructional mode relative to the traditional
chalk and talk of the classroom.
The variable TRANSFER is included in the model as a demographic
variable controlling for students who completed at least twenty-five
percent of their undergraduate education at another institution. Over
forty-five percent of the students in the research cohort are classified
as transfer students with the majority transferring from a junior
college. The transfer variable is expected to have a negative impact on
ETS score as business core classes in economics, accounting, and
business law at a junior college are not expected to meet the rigor of
the courses at a university.
The demographic variable FOREIGN is included in the study to
separate international students from domestic students. International
students are often recruited to diversify the campus environment and
raise the level of academic standards via performance on standardized
entrance examinations like the ACT or SAT. International students often
face unique language, psychic, and cultural challenges that might negate
some of their innate ability and work ethic. Eight percent of the
research cohort is classified as foreign students.
The variable GENDER is included in the model based on the finding
of previous researchers (Bagamery, Lasik & Nixon, 2005; Black &
Duhon, 2003; Mirchandani, Lynch & Hamilton, 2001) that male students
tend to outperform female students on the ETS exam. The research cohort
for this study is evenly divided between males and females.
The model includes two student motivation variables, GR10 and
GR20, where GR10 represents the case where the ETS exam score counts as
ten percent of the grade in the business capstone course and GR20 the
case where it counts as twenty percent of the capstone course grade.
The effort to tie student performance on the ETS exam to the course
grade as a motivator is consistent with Allen and Bycio (1997), but
adds the wrinkle of comparing grading applications at both the ten and
twenty percent levels. Bycio and
Allen (2007) provide nominal evidence that student motivation is an
important determinant of performance on the ETS exam, but their measure
is based on a 4-point scale of self-reported data and does not compare
a treatment group against a control group for a course grade
application.
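Before turning to the results, the following is a minimal sketch of how equation (2) could be estimated by ordinary least squares; the data file and column names are hypothetical, and the sketch is not the authors' actual estimation code.

```python
# Minimal OLS sketch for equation (2) using statsmodels.
# "ets_cohort.csv" and its columns are hypothetical stand-ins for the
# study's data; NET, TRANSFER, FOREIGN, GENDER, GR10, and GR20 are
# 0-1 dummy variables coded as described in the text.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ets_cohort.csv")
X = sm.add_constant(df[["ABILITY", "GPA", "NET", "TRANSFER",
                        "FOREIGN", "GENDER", "GR10", "GR20"]])
y = df["SCORE"]  # ETS percentile score

results = sm.OLS(y, X).fit()
print(results.summary())  # coefficients, t-statistics, R-squared
```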
RESULTS
Results from the ordinary least squares estimation of equation (2)
are presented in this section and Table 2. The sample cohort is derived
from students taking the ETS exam from 2003 to 2007. The total usable
sample size is 136, with 84 students eliminated from the global sample
of 220 because of incomplete information, usually relating to the lack
of ACT/SAT scores (Douglas & Joseph, 1995). None of the independent
variables in the model have a correlation higher than .62, providing
evidence that the model specification does not suffer from excessive
multicollinearity. The equation (2) model explains over 46 percent of
the variance in performance on the ETS exam. Four of the eight
independent variables in the model are statistically significant.
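The multicollinearity screen described above can be reproduced by inspecting the pairwise correlation matrix of the regressors; a sketch follows under the same hypothetical data assumptions as before.

```python
# Report the largest absolute pairwise correlation among the regressors.
# Same hypothetical file and column names as the estimation sketch.
import numpy as np
import pandas as pd

df = pd.read_csv("ets_cohort.csv")
regressors = ["ABILITY", "GPA", "NET", "TRANSFER",
              "FOREIGN", "GENDER", "GR10", "GR20"]
corr = df[regressors].corr()

mask = ~np.eye(len(regressors), dtype=bool)  # ignore the unit diagonal
print("max |r| among regressors:", corr.abs().values[mask].max())
```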
Two of the statistically significant variables are ABILITY and GPA.
The empirical results imply that student scores on the ETS exam are
directly related to academic ability measured by the ACT college
entrance exam and academic performance measured by college grade point
average. The statistically significant impact of standardized entrance
exam scores and grade point average is consistent with previous
research. The significance of the ABILITY variable could simply be based
on the observation that students with innate academic ability for
standardized exams perform at a relatively high level on the ETS exam.
The results relating to the ACT exam are somewhat tempered by the
observation that 38% of the students in the initial sample were
eliminated primarily for not having an official ACT/SAT score posted
with the university. The positive and significant impact of GPA on ETS
exam score is anticipated as students with high grades are more likely
to learn and retain core business information than students with a
relatively low grade point average. Consistent with Mirchandani et al.
(2001), overall GPA has a strong internal validity and provides a
measure of student performance related to the curriculum of the school.
The most interesting result from the study revolves around the
variable NET. Holding constant ability, grades, student motivation, and
demographic considerations, students completing multiple business
courses via the online (NET) format scored approximately six percentile
points lower on the ETS exam, but the result is not statistically
significant (t-statistic of -1.32). The statistically insignificant
result implies that the online instruction mode produces a learning
environment roughly equivalent to the traditional campus environment.
Recent advances in
online instruction tools that make it relatively easy to utilize
streaming video, narrated graphic illustrations, and related
communication instruments have narrowed the quality gap between the
campus and online learning environments. It should be noted that the
lowest ETS scores for students in the online mode were observed in the
first two years of the data set, providing anecdotal evidence for the
hypothesis that recent technological advances have improved the quality
of the online learning environment.
The three demographic variables in the model are not statistically
significant. The TRANSFER variable yielded a surprisingly positive
coefficient, but the variable is not statistically significant
(t-statistic of 0.86). There appears to be little difference in
performance on the ETS
exam for transfer students versus native students. The demographic
variable controlling for foreign student performance is positive, with
international students scoring five percentile points higher on the ETS
exam than domestic students, but not statistically significant. The
statistical insignificance of the FOREIGN variable is consistent with
the existing literature. The GENDER coefficient associated with males is
negative but highly insignificant. Unlike previous research, the results
of this study do not find any evidence of a gender differential with
respect to performance on the ETS exam.
The two student motivation variables are both positive and
statistically significant. The results provide evidence that students
are motivated to study and put forth effort on the ETS exam when scores
are applied to the capstone course grade. A ten percent application to
the capstone course grade results in a 12.91-point increase in the ETS
percentile score, and a twenty percent application results in an
18.1-point increase. The results clearly indicate a
significant student response to the grade motivator but might be
somewhat unique to this research cohort based on the middling mean ETS
score and large standard deviation. A research cohort composed of
students with average ETS scores well above the 50th percentile would
be mathematically unlikely to show an equivalent gain, since percentile
scores are bounded from above. The
positive and significant result is primarily applicable to programs that
struggle at or below the 50th percentile on the ETS exam and need to
employ a tangible incentive in order to get students to explicitly put
forth a significant and serious effort on the ETS exam instead of simply
treating it as a required task with little or no direct benefits or
penalties (Allen & Bycio, 1997). The results also imply that a ten
percent grade incentive is strong enough to motivate students to put
forth significant effort, although the twenty percent grade incentive
does yield a coefficient that is five percentile points larger. The
determination of a ten or twenty percent grade motivator should probably
be at the discretion of the course instructor for the capstone course
given that both are significant.
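To make these magnitudes concrete, the Table 2 coefficients can be combined into a predicted percentile score; the student profile below is a hypothetical illustration, not a case from the study.

```python
# Predicted ETS percentile from the Table 2 OLS estimates.
# Coefficients are taken from Table 2; the student profile is hypothetical.
coef = {"const": -87.304, "ABILITY": 3.178, "GPA": 19.320, "NET": -6.009,
        "TRANSFER": 4.273, "FOREIGN": 4.981, "GENDER": -0.269,
        "GR10": 12.911, "GR20": 18.105}

# Hypothetical student: ACT 21, GPA 3.0, campus-only, non-transfer,
# domestic, female, with the ETS score counting ten percent of the
# capstone grade (GR10 = 1).
student = {"const": 1, "ABILITY": 21, "GPA": 3.0, "NET": 0, "TRANSFER": 0,
           "FOREIGN": 0, "GENDER": 0, "GR10": 1, "GR20": 0}

predicted = sum(coef[k] * student[k] for k in coef)
print(f"predicted ETS percentile: {predicted:.1f}")  # about 50.3
```

Swapping GR10 for GR20 in this profile raises the prediction by roughly five percentile points, consistent with the comparison of the two grade incentives above.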
CONCLUSION
This study examines the determinants of student performance on the
ETS business exam at a regional university. Consistent with previous
research, the results find that academic ability measured by the college
entrance exam and student grade point average are the primary
determinants of student performance on the ETS exam. The empirical
results indicate that counting the ETS exam score as ten to twenty
percent of the capstone course grade significantly increases
performance on the exam. Gender, transfer student status,
completing courses online, and international student classification do
not appear to have an impact on student ETS exam performance. The
statistically insignificant result associated with the completion of
multiple business courses in the online instruction mode is particularly
interesting as a continuation of the literature examining the
effectiveness of online instruction.
REFERENCES
Allen, I. & Seaman, J. (2007). Online Nation: Five years of
growth in online learning. Needham, MA: Sloan Consortium.
Allen, I. & Seaman, J. (2007). Making the grade: Online
education in the United States, 2006 Midwestern edition. Needham, MA:
Sloan Consortium.
Allen, J. & Bycio, P. (1997). An evaluation of the Educational
Testing Service Major Field Achievement Test in Business. Journal of
Accounting Education, 15, 503-514.
Bagamery, B., Lasik, J., & Nixon, D. (2005). Determinants of
success on the ETS Business Major Field Exam for students in an
undergraduate multisite regional university business program. Journal of
Education for Business, 81, 55-63.
Becker, W. (1983). Economic education research: New directions on
theoretical model building. Journal of Economic Education, 14(2), 4-9.
Black, H. & Duhon, D. (2003). Evaluating and improving student
achievement in business programs: The effective use of standardized
tests. Journal of Education for Business, 79, 157-162.
Bowman, J. P. (2003). It's not easy being green: Evaluating
student performance in online business communication courses. Business
Communication Quarterly, 66(1), 73-78.
Brown, W. & Corkill, P. (2007). Mastering online education.
American School Board Journal, 194(3), 40-42.
Bycio, P. & Allen, J. (2007). Factors associated with
performance on the Educational Testing Service (ETS) Major Field
Achievement Test in Business (MFAT-B). Journal of Education for
Business, 83, 196-201.
Chizmar, J. & Spencer, D. (1980). Testing the specifications of
economics learning equations. Journal of Economic Education, 11(2),
45-49.
Davisson, W. & Bonello, F. (1976). Computer assisted
instruction in economics education. University of Notre Dame Press.
Douglas, S. & Joseph, S. (1995). Estimating educational
production functions with correction for drops. Journal of Economic
Education, 26(2), 101-112.
Ebersole, J. (2008). Online learning: An unexpected resource. The
Presidency: The American Council on Education's Magazine for Higher
Education Leaders, 11(1), 24-26, 28-29.
Fann, N. & Lewis, S. (2001). Is online education the solution?
Business Education Forum, 55(4), 46-48.
Fortune, M. F., Shifflett, B., & Sibley, R. (2006). A
comparison of online (high tech) and traditional (high touch) learning
in business communication courses in Silicon Valley. Journal of
Education for Business, 81(4), 210-214.
Gaytan, J. & McEwen, B. (2007). Effective online instructional
and assessment strategies. American Journal of Distance Education,
21(3), 117-132.
Hirschfeld, M., Moore R., & Brown, E. (1995). Exploring the
gender gap on the GRE subject test in economics. Journal of Economic
Education, 26(1), 3-15.
Jennings, S. & Bayless, M. (2003). Online vs. traditional
instruction: A comparison of student success. The Delta Pi Epsilon
Journal, 45(3), 183-190.
Lezberg, A. (1998). Quality control in distance education: The role
of regional accreditation. American Journal of Distance Education,
12(2), 26-35.
Marks, B., Sibley, T. & Arbaugh, J. (2005). A structural
equation model of predictors for effective online learning. Journal of
Management Education, 29(4), 531-563.
Martell, K. & Calderon, T. (2005). Assessment in business
schools: What it is, where we are, and where we need to go now. In K.
Martell & T. Calderon (Eds.), Assessment of student learning in
business schools (pp. 1-26). Tallahassee, FL: Association for
Institutional Research.
Martell, K. (2007). Assessing student learning: Are business
schools making the grade? The Journal of Education for Business, 82(4),
189-195.
Maxwell, N. & Lopus, J. (1994). The Lake Wobegon effect in
student self-reported data. American Economic Review, 84(2), 201-205.
Mirchandani, D., Lynch, R., & Hamilton, D. (2001). Using the
ETS Major Field Test in Business: Implications for assessment. Journal
of Education for Business, 77, 51-56.
National Center for Education Statistics (2001, January 4). Quick
tables and figures: Postsecondary education quick information system,
survey on distance education at postsecondary education institutions,
1997-1998. http://nces.ed.gov
Okula, S. (1999). Going the distance: A new avenue for learning.
Business Education Forum, 53(3), 7-10.
Peach, B., Mukherjee, A., & Hornyak, M. (2007). Assessing
critical thinking: A college's journey and lessons learned. The
Journal of Education for Business, 82(6), 313-320.
Pringle, C. & Michel, M. (2007). Assessment practices in
AACSB-accredited business schools. The Journal of Education for
Business, 82(4), 202-211.
Robles, M. & Braathen, S. (2002). Online assessment techniques.
The Delta Pi Epsilon Journal, 44(1), 39-49.
Siegfried, J. & Fels, R. (1979). Research on teaching college
economics: A survey. Journal of Economic Literature, 17(3), 923-969.
Terry, N. (2000). The effectiveness of virtual learning in
economics. The Journal of Economics & Economic Education Research,
1(1), 92-98.
Terry, N. (2007). Assessing the difference in learning outcomes for
campus, online, and hybrid instruction modes for MBA courses. The
Journal of Education for Business, 82(4), 220-225.
Terry, N., Mills, L., & Sollosy, M. (2008). Student Grade
Motivation as a Determinant of Performance on the Business Major Field
ETS Exam. Journal of College Teaching and Learning, 5(6), 20-25.
Trapnell, J. (2005). Foreword. In K. Martell & T. Calderon (Eds.),
Assessment of student learning in business schools. Tallahassee, FL:
Association for Institutional Research.
Worley, R. B., & Dyrud, M. (2003). Grading and assessment of
student writing. Business Communication Quarterly, 66(1), 79-96.
Neil Terry, West Texas A&M University
LaVelle Mills, West Texas A&M University
Duane Rosa, West Texas A&M University
Marc Sollosy, West Texas A&M University
Table 1: Summary Statistics

Variable    Mean     Standard Dev.
SCORE       48.49    28.07
ABILITY     21.04     4.31
GPA          2.97     0.49
NET          0.43     0.49
TRANSFER     0.47     0.50
FOREIGN      0.08     0.27
GENDER       0.50     0.50
GR10         0.19     0.39
GR20         0.22     0.42
Table 2: Determinants of ETS Performance

Variable     Coefficient (t-statistic)
Intercept    -87.304 (4.98) *
ABILITY        3.178 (4.71) *
GPA           19.320 (3.28) *
NET           -6.009 (-1.32)
TRANSFER       4.273 (0.86)
FOREIGN        4.981 (0.55)
GENDER        -0.269 (-0.06)
GR10          12.911 (2.02) *
GR20          18.105 (3.17) *
R Square       0.466
F-Value       13.85

Notes: * p < .05 and n = 136.