Using pretest scores to predict student performance--evidence from upper-level accounting courses.
Hayes, Thomas P., Jr. ; Tanner, Margaret
INTRODUCTION
Over the past decade, most public colleges and universities have
seen a shrinking share of their financial support come from public
sources as state governments scramble to cover rising costs. The most
recent economic downturn has only exacerbated this trend. Moreover,
schools face increasing pressure to produce results in the form of
better retention and graduation rates. Accordingly, administrators and
educators alike are striving to find ways to predict, and ultimately
improve, student performance (Richter, 2006).
Educators have long been aware of certain indicators of
performance. SAT and ACT scores, as well as GPA, have been consistently
found to predict student performance (Grover, Heck, & Heck, 2010;
Doran, Bouillon, & Smith, 1991; Eskew & Faley, 1988; Hunt, 1978;
Astin, 1971). Other factors, including effort (Eskew & Faley, 1988)
and pretesting of prior knowledge (Grover et al., 2010) have also been
found to effectively predict student performance.
Following Grover et al. (2010), in the present study 93 students
in an upper-level cost accounting course were tested at the beginning of
the semester to assess their retention of knowledge from the
introductory course. These pretest scores were included, along with
students' cumulative GPAs, in a model that predicts success in the
course. Consistent with expectations, the authors found that the model
predicted student performance well. In the following sections, the
authors provide a review of the relevant literature, the development of
the model and how it was tested, and a discussion of the results.
LITERATURE REVIEW
Predicting student performance has been a topic of interest for
educators for many years. Numerous variables, including GPA, SAT
scores, and even effort, have been tested as predictors of performance.
For example,
it has been known for some time that measures of aptitude, such as SAT
scores (Hunt, 1978; Astin, 1971) and past performance, such as GPA
(Astin, 1971) can predict performance in future courses. Intuitively,
these results make sense. For example, one would expect that past
performance should predict future performance.
More recent research has looked at other factors that may predict
student performance. In their study, Eskew and Faley (1988) developed a
model for predicting student exam performance in an introductory
financial accounting course. In addition to aptitude (SAT scores) and
past performance (GPA), they also included students' effort (in the
form of voluntary quizzes). Their overall model explained a significant
portion (53%) of the variance in examination scores.
Prior research has also examined the value of prior requisite
knowledge in predicting student performance. For example, Borde, Byrd,
and Modani (1998) looked at the relationship between student performance
in accounting prerequisite courses and their performance in an
introductory corporate finance course. Specifically, they found that
students who performed better in their accounting prerequisites also
performed better in the finance course.
Grover et al. (2010) hypothesized that student performance in an
introductory finance course would also be affected by students'
retention of prior requisite knowledge. They assessed retention of that
knowledge using a pretest given at the start of the semester. As
expected, the pretest was found to be a good indicator of course
performance. Specifically, the evidence suggests that retention of basic
knowledge is critical to student success in future courses. Perhaps more
importantly, the authors argue that the pretest also serves as an
effective assessment tool, giving both teachers and students valuable
feedback on areas where students most need to improve.
While intuitively one would expect prior requisite knowledge to be
a good indicator of future performance, results have been mixed. For
example, Marcal and Roberts (2001) found that a business statistics
prerequisite did not predict student performance in a financial
management course. Similarly, Kruck and Lending (2003) found that
prior related courses did not predict student performance in an
introductory Information Systems (IS) course. A possible explanation for
these mixed findings is the closeness of the prerequisite knowledge to
the content covered in the respective course. For example, while a
business statistics class may be a prerequisite for an upper-level cost
accounting class, it may not necessarily predict student performance as
well as an introductory managerial accounting class, which is also a
prerequisite.
RESEARCH OBJECTIVE AND HYPOTHESES DEVELOPMENT
The objective of this study is to examine predictors of student
performance in an upper-level accounting class. Specifically, this study
hypothesized that a pretest score may serve as an effective predictor of
student performance in a junior-level cost accounting course. Similar to
finance courses, accounting courses require students to retain certain
basic knowledge in order to be successful. Retention of earlier
accounting concepts is particularly important in the upper-level cost
accounting course, where students will use much of the content they
learned in their introductory accounting course. Intuitively, one should
expect that students who retain more of that basic knowledge will
perform better on exams in the upper-level accounting course.
Accordingly, the following hypothesis was formulated:
H1: There is a positive and significant relationship between
the student pretest score and his/her performance in an upper-level cost
accounting course.
Additionally, it is hypothesized that GPA is an effective predictor
of performance. Prior research suggests that past performance (as
measured by GPA) is a good indicator of future performance. Again,
intuitively, one should expect that students' past grades would
predict their grades in future courses. Thus, the following hypothesis
was formulated:
H2: There is a positive and significant relationship between
the student GPA and his/her performance in an upper-level cost
accounting course.
Overall, the authors believe that these two variables, GPA and
pretest scores, are the best indicators of student performance in the
upper-level cost accounting course.
METHODOLOGY
Subjects
For this study, data were collected from all students enrolled in a
junior-level cost accounting course at a regional university in the
Mid-South. To ensure an adequate number of subjects, data were collected
over four semesters. A test of means of student GPAs across the four
semesters showed no significant differences. The initial sample included
102 students; however, 9 students were removed from the sample because
they dropped the course before the first exam. Ultimately, there was a
usable sample of 93 subjects. Of those, 64 (69%) were female. A test of
means of student GPAs showed no differences across gender.
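To make these checks concrete, the following is a minimal sketch (not
from the paper) of one way to run them in Python; the data file and the
column names gpa, semester, and gender are hypothetical stand-ins for
the authors' data.
```python
# Sketch of the sample checks above: a one-way ANOVA of GPA across the
# four semesters and a two-sample t-test of GPA by gender. The CSV file
# and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("cost_accounting_students.csv")  # hypothetical file

# GPA means across the four semesters of data collection
by_semester = [g["gpa"].values for _, g in df.groupby("semester")]
print(stats.f_oneway(*by_semester))

# GPA means by gender
male = df.loc[df["gender"] == "M", "gpa"]
female = df.loc[df["gender"] == "F", "gpa"]
print(stats.ttest_ind(male, female))
```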
Measurement of Variables
Two independent variables were employed in the regression model to
predict student performance on exams. The first was the student's
cumulative grade point average (GPA), which should explain a significant
portion of the variance in the final grade (Eskew & Faley, 1988).
For all subjects, the authors collected the cumulative GPA earned prior
to the semester in which they took the cost accounting course.
The second independent variable was the student's score (SCORE) on
a pretest given on the first day of the semester. Essentially, this
score represents the knowledge students retained from their introductory
managerial accounting course. The goal of this exam is twofold. First,
it provides students with feedback regarding their retention of course
content. Second, the results provide an effective advising tool to help
students with those content areas in which they scored poorly. The
questions for this pretest are based upon the comprehensive examination
students took in their introductory managerial accounting course.
The dependent variable in the model was the simple average of a
student's exam scores in the course, including the score on the
comprehensive final examination. Although students can earn other points
in the course (e.g., homework points), these points were minor compared
to their performance on exams. In fact, performance on exams was the
single biggest determinant of whether students passed the course.
Accordingly, exam scores were used in the regression model.
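The model itself is a standard two-predictor ordinary least squares
regression. The following is a minimal sketch of how it could be
estimated; as before, the file and the column names gpa, score, and
exam_avg are hypothetical, not the authors' actual variable names.
```python
# Sketch of the two-variable OLS model: average exam score regressed on
# cumulative GPA and pretest SCORE. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cost_accounting_students.csv")  # hypothetical file
X = sm.add_constant(df[["gpa", "score"]])         # intercept + 2 predictors
ols_fit = sm.OLS(df["exam_avg"], X).fit()

print(ols_fit.summary())     # coefficients, t-statistics, F, p-values
print(ols_fit.rsquared_adj)  # adjusted R-square as reported in Table 1
```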
RESULTS AND DISCUSSION
Of course, one concern in any regression analysis is that two or
more of the independent variables may be intercorrelated. In the present
study, one might expect that GPA and SCORE are correlated with each
other, such that a student with a higher GPA would generally perform
better on the pretest and vice versa. To address this concern, the
authors examined the part and partial correlations as well as
collinearity statistics when they ran the regression. The part and
partial correlations for both GPA and SCORE are almost identical to
their respective zero-order correlations, suggesting that
multicollinearity is not a serious problem. Additionally, the variance
inflation factors (VIF) on both variables are very close to 1.0, which
also suggests that there is little problem with multicollinearity in the
study.
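The VIF diagnostics described above can be computed as in the following
sketch, which continues the hypothetical design matrix X from the
previous example.
```python
# Sketch of the collinearity check: variance inflation factors for the
# two predictors (values near 1.0 indicate little shared variance).
from statsmodels.stats.outliers_influence import variance_inflation_factor

for i, name in enumerate(X.columns):
    if name == "const":
        continue  # the intercept column has no meaningful VIF
    print(name, variance_inflation_factor(X.values, i))
```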
Multiple regression results are presented in Table 1. As noted in
the table, the adjusted R-square shows that the model accounts for 58%
of the variance in exam scores, and the overall model is significant (F
= 65.283; p < 0.001). In addition, the results show that both SCORE
(t = 6.22; p < 0.001) and GPA (t = 7.58; p < 0.001) are
significant predictors of exam performance, thus supporting both
hypotheses. No interaction effects were observed.
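As a consistency check (not reported in the paper), the F-statistic and
adjusted R-square in Table 1 agree with each other, taking n = 93
subjects and k = 2 predictors from the text:
```latex
% Back out R^2 from the reported F-statistic, F = (R^2/k) / ((1-R^2)/(n-k-1)):
\[
  \frac{R^2}{1 - R^2} = \frac{F \cdot k}{n - k - 1}
                      = \frac{65.283 \times 2}{90} \approx 1.451
  \quad\Rightarrow\quad R^2 \approx 0.592
\]
% The adjusted R-square then reproduces the reported 58%:
\[
  \bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1}
            = 1 - 0.408 \times \frac{92}{90} \approx 0.583
\]
```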
Overall, the results are encouraging. Specifically, the evidence
suggests that the model predicts student exam scores in the upper-level
cost accounting course. While this study certainly adds to the
accounting education literature, it also confirms the authors'
expectations regarding giving students this initial exam. By giving
students a pretest on the first day of the semester, prior
learning can be more readily assessed, and instructors can adjust their
teaching accordingly. Ultimately, the authors expect that adding this
type of exam to the course will provide a useful advising tool to aid
students in the areas where they need it most.
LIMITATIONS AND RECOMMENDATIONS FOR FUTURE RESEARCH
While care was taken to address all methodological issues, a few
caveats should be noted. First, although the results are encouraging and
are consistent with prior research, one must be cautious in generalizing
the results to other upper-level accounting courses. Future research
should investigate whether this model predicts students' exam
performance in other courses, such as intermediate accounting. Second,
the results may not be generalizable across other university settings
due to the limitations inherent in a convenience sample. Subjects came
from a single university in the Mid-South, so the sample may not be
representative of student populations at other institutions.
Accordingly, future research may want to replicate the current study
using a larger, more diverse sample.
Third, although this model does account for a significant portion
of the variance in exam scores, the adjusted R-square of 58% suggests
there are other
factors that predict student performance on exams. Without sacrificing
parsimony in the model, it may be worthwhile to consider other
variables. For example, although not the focus of the present study, the
findings from Eskew and Faley (1988) suggest that some measure of effort
would be an important contribution to the model.
There are other factors that may predict students' performance
as well. For example, the amount of time that has elapsed since students
took their introductory managerial accounting class may affect their
performance in the upper-level course. In a more traditional university
setting, one might expect students to take the upper-level cost
accounting course in the semester immediately following their
introductory course. The
same may not be true for institutions with large commuter and/or
nontraditional student populations. Consequently, future research should
consider whether the time lapse between the introductory and upper-level
cost accounting courses affects students' exam performance.
Yet another factor that may predict student performance is
students' level of confidence in the course material. More specifically,
a student's overconfidence in his/her retention of content knowledge may
be negatively associated with his/her exam performance. Much research has
been done in this area, including in academic settings where students
were found to be overconfident in their abilities (Clayson, 2005). The
same may not be true in accounting courses, however, where students
perceive the material as more difficult (Sale, 2001). Thus, future
research may want to examine whether students' level of confidence
affects their exam performance.
Despite these limitations, the present study does contribute to the
accounting education literature. Specifically, it adds to the knowledge
of those factors that can predict student performance in upper-level
courses. With this knowledge, accounting educators can intervene early
in the semester, helping students in the areas where they are
struggling and ultimately ensuring higher success rates in their classes.
REFERENCES
Astin, A.W. (1971). Predicting academic performance in college. New
York: Free Press.
Borde, S. F., Byrd, A.K., & Modani, N.K. (1998). Determinants
of student performance in introductory corporate finance courses.
Journal of Financial Education, 24, 23-30.
Clayson, D.E. (2005). Performance overconfidence: Metacognitive
effects or misplaced student expectations? Journal of Marketing
Education, 27, 122-129.
Doran, B.M., Bouillon, M.L., & Smith, C.G. (1991). Determinants
of student performance in accounting principles I and II. Issues in
Accounting Education, 6, 74-84.
Eskew, R.K., & Faley, R.H. (1988). Some determinants of student
performance in the first college-level financial accounting course. The
Accounting Review, 63, 137-147.
Grover, G., Heck, J., & Heck, N. (2010). Pretest in an
introductory finance course: Value added? Journal of Education for
Business, 85, 64-67.
Hunt, E. (1978). Mechanics of verbal ability. Psychological Review,
85, 109-130.
Kruck, S.E., & Lending, D. (2003). Predicting academic
performance in an introductory college-level IS course. Information
Technology, Learning, and Performance Journal, 21(2), 9-15.
Marcal, L., & Roberts, W.W. (2001). Business statistics
requirements and student performance in financial management. Journal of
Financial Education, 27, 29-35.
Richter, A. (2006). Intertemporal consistency of predictors of
student performance: Evidence from a business administration program.
Journal of Education for Business, 82, 88-93.
Sale, M.L. (2001). Steps to preserving the profession. The National
Public Accountant, 46, 8-10, 19.
About the Authors:
Thomas P. Hayes, Jr. is an Associate Professor of Accounting at the
University of Arkansas - Fort Smith. He teaches Auditing and
Intermediate Financial Accounting. His research interests include
accounting education and auditor decision-making.
Margaret Tanner is an Associate Professor of Accounting at the
University of Arkansas - Fort Smith. She teaches Governmental Accounting
and Advanced Financial Accounting. Her research interests include
accounting education and behavioral accounting research.
Thomas P. Hayes, Jr.
Margaret Tanner
University of Arkansas--Fort Smith
Table 1
Regression Summary

Variable          Standardized Beta     t *       p
GPA                    0.530           7.580    0.000
Pretest Score          0.434           6.216    0.000

Overall Model
Adjusted R-square: 58%
F-statistic: 65.283
p = 0.000

* df = 92 for all t values