A student generated expressive writing program and the principles of finance course.
Willey, Thomas; Willey, Liane Holliday
INTRODUCTION
Beginning finance courses, composed largely of business majors, typically present several challenges: students often lack background in the field, the subject matter is complex and analytical, and interest and motivation are frequently low (particularly among those who are not finance majors). To
further complicate matters, beginning principles classes are frequently
taught in large lecture rooms and to large numbers of students, thus
decreasing the possibility of discussion, student/teacher interaction,
and consequently, learning, interest and motivation. In an effort to
diminish the effects of these variables, the authors of this study
sought to make improvements in instructional strategies. Specifically,
the authors empirically investigated the effects of a student generated
expressive writing program on overall performance in a Principles of
Finance class.
Writing in the classroom is not new; students have probably always taken written notes as a way to record teacher-directed content information. In recent years, however, research in the field of
comprehension has found that the act of writing to learn information can
be far more sophisticated than simply jotting down what the teacher has
said. In a student generated expressive writing program, learners are taught and encouraged to write out any questions they have, concepts they need to review, and any additional supporting examples or elaborations they can think of. When they produce these kinds of writing, they are more apt to apply critical thinking skills and a higher order of understanding to their general knowledge base. This appears to be the case because student generated expressive writing: 1) focuses thought; 2) makes thoughts available for introspection and further analysis; 3) translates mental images into more concrete pictures; and 4) motivates interpersonal communication when the teacher responds to the student's writing (see Britton et al. (1975), Emig (1977) and Draper (1982)). While the first three factors serve as an intrapersonal study
strategy, wherein students work and rework their notes and their
confusing and clarifying thoughts as they relate to those notes, the
last factor serves to build an interpersonal relationship between the
instructor and the students. All of these factors work in combination to build a stronger schema of understanding and, it is hoped, greater motivation to do well in the course, because the student no longer feels isolated.
Though research in writing-to-learn has found its way across most curricula (see Fulwiler (1980) and Willey (1989)), it is still relatively new in quantitative courses. A review of the literature found little theoretical and no empirical research relating writing-to-learn to finance courses. Dahlquist (1995) suggests that properly designed
and implemented writing projects can increase the communication skills
and the overall learning of finance students. Templeton (1996) presented
an overview of the potential benefits and costs of using writing
exercises in a finance course. Ciccotello and Green (1997) investigated
the impact written case analysis had on undergraduate and graduate
finance students. Using survey results, the authors found that undergraduates viewed case writing less favorably than graduate students did, reporting that the process was more time-consuming and frustrating than expected. The majority of the empirical research, however, has studied the effects of writing in statistics and mathematics courses.
Specifically, Watson (1980) found that students who regularly wrote
in "learning logs" or journals showed improved ability in
quantitative problem solving. Smith, Miller and Robertson (1991) found that students who completed regular writing assignments in a statistics class showed improved learning when compared to a non-writing control group. In light of these positive findings, it seems important to continue investigating the effects of writing on students' achievement in a Principles of Finance course. This article empirically compares the exam performance of students in a student generated expressive writing program with that of students in a non-writing control class.
METHODOLOGY
Fifty-seven undergraduate students enrolled in two Principles of
Finance classes at a regional midwestern college composed the sample
groups of this study. There were thirty-three students in the
experimental group and twenty-four in the control group. The same
instructor taught both sections during normal daytime class meetings.
Following the completion of each chapter, the students in the
experimental group were directed to write answers to the following
questions:
1. Pretend you are teaching this class. Prepare an explanation of the major concepts from this chapter.
2. What concepts are unclear to you?
3. What concepts would you like to review?
The papers were collected and reviewed by the instructor, who made brief but appropriate comments in one or more of the following domains: good job, needs improvement, or seek further tutoring assistance. The papers were returned by the following class period.
The instructor also used the papers to record major errors in comprehension and to adjust subsequent lectures to correct or review missed or confusing areas. To encourage completion of this work, the students were told that they would receive participation points totaling 10% of the overall grade if they completed each writing assignment. The students in the control group received traditional
lecture-based instruction with 10% of their total grade derived from
problem sets.
HYPOTHESES
Six principal hypotheses were tested:
H01: There is no difference between the mean G.P.A. for the experimental and control groups.
The first hypothesis tests whether the mean G.P.A.s of the two groups are statistically different from each other. A statistically significant difference would imply that the samples are not similar and that the results could be biased due to non-homogeneous groups.
H02: There is no interaction effect between writing assignment and
student quality on exam scores.
This hypothesis tests for the existence of an interaction effect, which, if present, would make the primary impact of the writing process unclear. If the null hypothesis is not rejected, the research can focus on the main treatment effect of the writing program.
H03-06: There is no difference in overall performance on exam scores between the control and experimental groups.
These four hypotheses will gauge the effectiveness, or lack of
effectiveness, of the student generated expressive writing program as
measured by the difference in exam scores.
Table 1 shows the summary statistics of G.P.A. and the four exams for the experimental and control groups. Beginning-of-semester G.P.A. is expected to be positively correlated with student performance (Chan, Shum and Wright (1997); Heck and Stout (1998)). The mean G.P.A. for the experimental group was 3.54% higher than that of the control group (2.926 versus 2.826). The standard deviations of G.P.A. for the two groups were within 1.76% of each other (0.578 and 0.568 for the experimental and control groups, respectively). The coefficient of variation (CV), a relative measure of dispersion, was 0.198 for the experimental group and 0.201 for the control group, showing the variability of G.P.A. for the two groups to be essentially the same. These summary results indicate an approximately identical student quality composition, as measured by G.P.A., in each group.
The mean exam scores were very similar for the four exams. Exam 1
covered the finance function, financial statements, cash flow and ratio
analysis. Exam 2 included the time value of money and valuation. Exam 3
covered risk and return and capital budgeting. The topics on Exam 4 were
short-term and long-term financing and the cost of capital. The
differences between the experimental and control group means were +0.19
for Exam 1, -3.10 for Exam 2, +3.77 for Exam 3 and -0.49 for Exam 4.
Further hypothesis testing (H03-06) will gauge whether these differences are statistically significant.
The difference in absolute dispersion (standard deviation) between the two groups was very small for three of the four exams: 1.27 for Exam 1, 0.74 for Exam 3, and 0.24 for Exam 4. Additionally, the CVs for these three exams are virtually identical between the two groups: the experimental and control CVs for Exam 1 were 0.09 and 0.11, both groups had a CV of 0.20 on Exam 3, and both groups had a CV of 0.17 on Exam 4. However, Exam 2 scores for the experimental group showed more variability, with a standard deviation of 15.66 and a CV of 0.23, compared to a standard deviation of 10.03 and a CV of 0.14 for the control group, an absolute difference in standard deviations of 5.63.
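For readers who wish to reproduce the Table 1 summary statistics, the following minimal Python sketch (not the authors' code; the file name and column names are hypothetical) computes the mean, standard deviation, and coefficient of variation for each group:

    import pandas as pd

    # Hypothetical layout: one row per student, a 'group' column marked
    # "experimental" or "control", plus beginning G.P.A. and four exam scores.
    df = pd.read_csv("finance_students.csv")

    measures = ["gpa", "exam1", "exam2", "exam3", "exam4"]
    grouped = df.groupby("group")[measures]

    means = grouped.mean()
    sds = grouped.std(ddof=1)   # sample standard deviation
    cvs = sds / means           # coefficient of variation = SD / mean

    print(means.round(2))
    print(sds.round(3))
    print(cvs.round(3))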
RESULTS OF ANOVA TEST FOR G.P.A. (H01)
The purpose of the one-way analysis of variance (ANOVA) is to test for the equality of mean G.P.A. between the experimental and control groups. The results of this test allow further testing of the remaining hypotheses. Since the variance ratio (F) reported in Table 2 is 0.42 (p value of 0.52), the null hypothesis should not be rejected. This indicates that the mean G.P.A.s of the experimental and control groups are not statistically different from each other; the small F value means that the variation between the group means is modest relative to the variation within the groups. The importance of this test is that if, for example, the mean G.P.A. for the control group had been significantly higher than that of the experimental group, then interpreting H03-06 would have been uncertain or ambiguous, because any difference in exam performance could have been a result of the control group's higher mean G.P.A.
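With only two groups, the G.P.A. comparison in Table 2 is a standard one-way ANOVA (equivalent to a pooled-variance t test). A minimal sketch, assuming the hypothetical data frame introduced above, is:

    from scipy import stats

    gpa_exp = df.loc[df["group"] == "experimental", "gpa"]
    gpa_ctl = df.loc[df["group"] == "control", "gpa"]

    # One-way ANOVA on beginning-of-semester G.P.A. (test of H01)
    f_stat, p_value = stats.f_oneway(gpa_exp, gpa_ctl)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # Table 2 reports F = 0.42, p = 0.52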
TESTING FOR AN INTERACTION EFFECT (H02)
This hypothesis was examined using the multiple regression equations used to test H03-06. In these equations, a statistically significant result for the interaction variable (Dummy * G.P.A.) would indicate an interaction effect. Results of the four regression equations showed no evidence of an interaction effect (Tables 3 through 6). The p values for the interaction term on the four exams were, respectively, 0.443, 0.258, 0.302 and 0.772. Therefore, the null hypothesis (H02) was not rejected for any of the four exams, which allows a clearer interpretation of the main treatment effect of the writing program. Had the null hypothesis been rejected, indicating a statistically significant interaction term, most of the interpretation would have fallen on the interaction effect rather than on the main treatment effect.
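The specification behind Tables 3 through 6 regresses each exam score on the treatment dummy, beginning G.P.A., and their product. A minimal sketch of the Exam 1 equation, again using the hypothetical data frame above rather than the authors' code, is:

    import statsmodels.formula.api as smf

    # Dummy = 1 for the writing (experimental) group, 0 for the control group
    df["dummy"] = (df["group"] == "experimental").astype(int)

    # exam1 ~ dummy + gpa + dummy:gpa; the dummy:gpa coefficient tests the
    # interaction (H02) and the dummy coefficient tests the writing treatment (H03)
    model = smf.ols("exam1 ~ dummy + gpa + dummy:gpa", data=df).fit()
    print(model.summary())      # coefficients, t-statistics and p values
    print(model.rsquared_adj)   # adjusted R-squared, as reported in Table 3

Substituting exam2, exam3 or exam4 as the dependent variable yields the remaining three equations.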
TESTING FOR THE EFFECT OF THE WRITING PROGRAM (H03-06)
Table 3 shows that, with the exception of G.P.A., none of the independent variables were statistically significant predictors of performance on the first exam. G.P.A. was significant at a p value of 0.001, meaning that instructional method, whether student expressive writing or lecture only, did not explain performance on the first exam; only G.P.A. did. The positive sign on the G.P.A. coefficient shows that the higher (lower) the G.P.A., the better (worse) the exam score. The adjusted R² indicates that 24% (F value = 6.88 and p value of 0.001) of the variability in test scores can be explained by the model. In other words, 76% of the variation in Exam 1 scores was due to other important, but unmeasured, factors. One possible variable for inclusion might be the time of day the class is taught.
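As a back-of-the-envelope illustration (not part of the original analysis), the rounded coefficients in Table 3 imply the following predicted Exam 1 scores:

    # Predicted Exam 1 score from the Table 3 coefficients:
    # 55.81 + 6.73*Dummy + 8.71*G.P.A. - 2.53*(Dummy * G.P.A.)
    def predicted_exam1(gpa: float, experimental: bool) -> float:
        dummy = 1 if experimental else 0
        return 55.81 + 6.73 * dummy + 8.71 * gpa - 2.53 * dummy * gpa

    print(round(predicted_exam1(3.0, experimental=False), 1))  # 81.9 (control)
    print(round(predicted_exam1(3.0, experimental=True), 1))   # 81.1 (experimental)

For a student with a 3.0 G.P.A., the two groups are predicted to score within about a point of each other, while each additional G.P.A. point raises the predicted control-group score by 8.71 points, consistent with the positive and significant G.P.A. coefficient.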
Results of Exam 2, as shown in Table 4, were similar to those of
Exam 1. The G.P.A. variable was the only statistically significant
factor in performance (p value = 0.086) and the explanatory power of the
model declined slightly from 24% to 22%. The dummy variable for the
writing process and the interaction effect variable were not
statistically significant.
Table 5 shows that Exam 3 results deviated from the pattern of the
first two exams. On this test, none of the independent variables were
statistically significant. A possible explanation for the relatively poor performance of the model (adjusted R² of 6.7%) is the content covered on the exam. Two of the traditionally more difficult subject areas in the course, the risk-return tradeoff and capital budgeting, were included on this exam and may have contributed to the lower results.
Table 6 shows the regression results for Exam 4 reverting to the pattern of the first two exams. The p value for G.P.A. was 0.015, while the dummy and interaction variables showed no statistical significance (p values of 0.697 and 0.772, respectively). The independent variables explained 21.5% of the variation in Exam 4 scores.
DISCUSSION
Results of this study were disappointing but could be at least partially attributed to a series of uncontrollable extrinsic variables. First, sample sizes were compromised when fewer students than expected enrolled in the two groups and more students than expected dropped the control group course. As a result, there was a limit on the number of variables that could be added to improve the model. This was an unfortunate occurrence given the nature of the treatment and the difficulty level of the course. The inclusion of additional independent variables, such as ACT/SAT scores, gender, major field of study, and attitudes toward writing, might have led to more promising results. Second, there was an internal validity threat from the possibility of diffusion effects. Since the experimental group met at 9:30 a.m. and the control group met at 2:00 p.m., it was possible for knowledge gained from the writing assignments, as well as test questions, to have diffused between the groups.
Despite the statistically insignificant results, research in this
area should be continued. Intuitively, it would seem that any treatment that encourages higher-level thinking should be effective. There is a distinct possibility that a student expressive writing program could work, unless students are intrinsically opposed to writing or come to view it as simply too much extra work. Admittedly, there is little professors can do to overcome the former concern, but perhaps the latter could be addressed. Some possible
strategies are the use of extra-credit points with the assigned writings
or devoting a portion of class time to work with peers and/or the
instructor in completing the assignment. If the above concerns are
addressed in future research, we contend that the program has the
potential to be an effective tool for increasing student learning.
REFERENCES
Bloom, B. (1980). The new direction in educational research:
Alterable variables. Phi Delta Kappan, (February), 382-385.
Britton, J., T. Burgess, N. Martin, A. McLeod & H. Rosen
(1975). The Development of Writing Abilities, London: Macmillan, 11-18.
Chan, K. C., C. Shum & D. J. Wright (1997). Class attendance
and student performance in principles of finance. Financial Practice and
Education, 7(2), 58-65.
Ciccotello, C. S. & S. G. Green (1997). Student-authored case
studies in finance: Performance and observations. Journal of Financial
Education, 23(Spring), 55-60.
Dahlquist, J. (1995). Writing assignments in finance: Development
and evaluation. Financial Practice and Education, 5(1), 107-112.
Draper, V. (1982). Formative writing: Writing to assist learning in
all subject areas. In Gerald Camp (Editor), Teaching writing: Essays
from the bay area writing project, Berkeley, CA: Boynton/Cook, 148-183.
Emig, J. (1977). Writing as mode of learning. College Composition
and Communication, 28, 122-128.
Freeman, M. & M. Murphy (1990). The write thing in the
mathematical sciences. Mathematics and Computer Education, 24(2),
116-121.
Fulwiler, T. (1980). Journals across the disciplines. English
Journal, 69(9), 14-19.
Ganguli, A.B. (1989). Integrating writing in developmental
mathematics. College Teaching, 37(4), 140-142.
Geeslin, W. E. (1977). Using writing about math as a teaching
technique. Mathematics Teacher, 70(2), 112-115.
Heck, J. L. & D. E. Stout (1998). Multiple-choice vs.
open-ended exam problems: Evidence of their impact on student
performance in introductory finance. Financial Practice and Education,
8(1), 83-101.
Howard, J. (1984). Recognizing writing as the key to learning.
Education Week, (September 5), 12-13.
Smith, C.H., D. M. Miller & A. M. Robertson (1991). Using
writing assignments in teaching statistics: An empirical study.
Mathematics and Computer Education, 25(1), 21-33.
Templeton, W. K. (1996). Beyond the numbers: Writing to learn
finance. Journal of Financial Education, 22 (Spring), 56-60.
Watson, M. (1980). Writing has a place in a mathematics class.
Mathematics Teacher, 73 (October), 518-519.
Willey, L.H. (1989). The effects of selected writing-to-learn
approaches on high school students' attitudes and achievement.
(Doctoral Dissertation, Mississippi State University, 1988).
Dissertation Abstracts International, 49.
Thomas Willey, Grand Valley State University
Liane Holliday Willey, Grand Valley State University
Table 1. Summary Statistics of G.P.A. and Exam Scores
                             G.P.A.   Exam 1   Exam 2   Exam 3   Exam 4
Experimental (N = 33)
  Mean                        2.93     80.61    68.48    65.94    71.55
  Standard Deviation          0.578     7.38    15.66    13.24    12.21
  Coefficient of Variation    0.198     0.09     0.23     0.20     0.17
Control (N = 24)
  Mean                        2.83     80.42    71.58    62.17    72.04
  Standard Deviation          0.568     8.65    10.03    12.50    11.97
  Coefficient of Variation    0.201     0.11     0.14     0.20     0.17
Table 2. Analysis of Variance: G.P.A. of Control and Experimental
Group. (n = 57)
Source DF SS MS F Probability
Factor 1 0.138 0.138 0.42 0.520
Error 55 18.130 0.330
Total 56 18.268
Table 3. Exam 1 Regression Model (n = 57)
Dependent Variable   Constant    Dummy     G.P.A.        Interaction   Adjusted R²   F-Value
Exam 1                 55.81      6.73       8.71           -2.53          24%         6.88
T-statistics           (7.70)    (0.70)     (3.46)          (-0.77)
p values                0.001     0.485      0.001 ***       0.443                     0.001 ***

Dummy = 1 if experimental group, 0 if control
G.P.A. = student's grade point average at the beginning of the semester
Interaction = Dummy * G.P.A.
* Significant at 90% confidence level
** Significant at 95% confidence level
*** Significant at 99% confidence level
Table 4. Exam 2 Regression Model (n = 57)
Dependent Variable   Constant    Dummy     G.P.A.        Interaction   Adjusted R²   F-Value
Exam 2                 49.88    -22.96       7.68            6.53          22%         6.38
T-statistics           (3.95)   (-1.38)     (1.75)          (-1.14)
p values                0.001     0.174      0.086 *         0.258                     0.001 ***

Dummy = 1 if experimental group, 0 if control
G.P.A. = student's grade point average at the beginning of the semester
Interaction = Dummy * G.P.A.
* Significant at 90% confidence level
** Significant at 95% confidence level
*** Significant at 99% confidence level
Table 5. Exam 3 Regression Model (n = 57)
Dependent Variable   Constant    Dummy     G.P.A.        Interaction   Adjusted R²   F-Value
Exam 3                 54.63    -14.73       2.67            6.23          6.7%        2.34
T-statistics           (4.13)   (-0.84)     (0.58)          (1.04)
p values                0.00      0.40       0.564           0.302                     0.084 *

Dummy = 1 if experimental group, 0 if control
G.P.A. = student's grade point average at the beginning of the semester
Interaction = Dummy * G.P.A.
* Significant at 90% confidence level
** Significant at 95% confidence level
*** Significant at 99% confidence level
Table 6. Exam 4 Regression Model (n = 57)
Dependent Variable   Constant    Dummy     G.P.A.        Interaction   Adjusted R²   F-Value
Exam 4                 44.35     -5.81       9.80            1.48          21.5%       6.12
T-statistics           (3.94)   (-0.39)     (2.51)          (0.29)
p values                0.001     0.697      0.015 **        0.772                     0.001 ***

Dummy = 1 if experimental group, 0 if control
G.P.A. = student's grade point average at the beginning of the semester
Interaction = Dummy * G.P.A.
* Significant at 90% confidence level
** Significant at 95% confidence level
*** Significant at 99% confidence level