Online assessment feedback: competitive, individualistic, or ... preferred form!
Bower, Matt
This study investigated "the effects of receiving the
preferred form of online assessment feedback upon middle school
mathematics students." Students completed a Web-based quadratic equations learning module followed by a randomly generated online quiz that they could practise as often as they liked. The effect of receiving
their preferred form of feedback (either competitive or individualistic)
upon their academic performance and attitude indicators was measured.
The three key findings of the study were that:
i) The facility to practise led to a significant improvement in
test scores
ii) Providing students with their non-preferred form of feedback
had a significantly negative impact on their mathematics ability
self-rating
iii) Boys appeared more likely to adopt a fixated approach to this
"power-based" repetitive practice task.
The differential effect of competitive versus individualistic
feedback was also analysed.
INTRODUCTION
The purpose of this study was to investigate "the effects of
receiving the preferred form of online assessment feedback upon middle
school mathematics students."
Specifically, high school students who completed an online
mathematics learning module and quiz system were first asked whether
they preferred to receive performance feedback that compares them to
other people (norm-referenced or "competitive") or to their
own past attempts (self-referenced or "individualistic").
Students then worked through an online quadratic equations learning
module followed by a randomly generated and timed online quiz that they
could practise as often as they chose (formative assessment). At each
attempt all students received corrective and performance feedback, with
approximately one-third of the students receiving their preferred form
of comparative feedback (either competitive or individualistic),
one-third receiving their non-preferred form of comparative feedback,
and one-third receiving no comparative feedback.
Approximately one week later, students completed a final quiz
(summative assessment) and a post-survey. The pre-survey,
quiz, and post-survey data were then analysed to gauge the effect of
receiving preferred versus non-preferred and competitive versus
individualistic forms of online feedback upon students' performance
and attitude. The data were also analysed ex post facto to detect other
educationally pertinent results, such as any differences in gender
effects of the experiment.
This research was conducted online using a site specifically
constructed for this experiment. To gain an appreciation for the
instruments and processes utilised in this project (pre-survey,
quadratic equations learning module, randomly generated quizzes, and
post-survey) please visit the site at
http://n2.mpce.mq.edu.au/~mbower/qaf/ (1)
Background
Providing learners with online performance feedback is becoming
more prevalent in educational contexts worldwide. However, concerns
arise over the form of that feedback (either self-referenced,
norm-referenced, or criterion-referenced) and the effects it has upon
students' performance, attribution of academic success, and
self-esteem. The research conducted in this experiment attempted to
determine the effect of receiving differential forms of feedback upon
learner academic performance and attitude.
There has been some encouraging research to date regarding the
effect of Web-based feedback upon students. Sonak, Suen, Zappe, and
Hunter (2002) found a direct positive relationship between the amount of
time that junior high school students used an online performance
feedback system and their academic performance (p. 15). In another
experiment involving 176 first year psychology undergraduates, Cassady,
Budenz-Anders, Pavlechko, and Mock (2001) found significant differences
in performance in the final examination between students who did and did
not take advantage of online formative assessment quizzes (p. 6).
One of the key advantages of online assessment is its capacity to
provide retesting opportunities to promote mastery learning. In their
investigation into the effect of criterion-referenced grading and
retesting opportunities on the performance (and motivation) of first
year psychology students, Covington and Omelich (1984) found that
"performance superiority of mastery instruction occurred primarily
because of the retest option, with enhanced motivation due to both
retesting opportunities and criterion-referenced standards" (p.
1038).
However, there has always been contention regarding the type of
feedback that students should receive. Historically research into the
effect of competitive (norm-referenced) goal structures versus
individualistic (self-referenced) goal structures upon academic
performance has not been conclusive. Lewis and Cooney (1986, p. 3)
report:
In a meta-analysis of 122 studies of the effects of goal
structures on achievement, Johnson, Maruyama, Johnson, Nelson, and
Skon (1981) reached three broad conclusions: ... (3) that
competitive and individualistic structures do not have significant
differential effects on achievement. Other reviewers have reached
different conclusions (see Hayes, 1976; Slavin, 1977). While most
reviewers conclude that competitive and individualistic goal
structures do not produce differential effects on achievement ...
Since then, resolution has not been reached. Some researchers have
argued in favour of a competitive approach to feedback. Becker and Rosen
(1992) employed cost/benefit stochastic modelling to argue that
"competition among students does stimulate academic effort provided
students are appropriately rewarded for achievement" (p. 108),
discounting competency-based grading as a less effective assessment
approach for promoting academic performance. Lam, Yim, Law, and Rebecca
(2001) found that a competitive environment during a 2-hour Chinese
typewriting course led to significantly better performance in easy
tasks compared to students in a non-competitive environment, supporting
the idea that competitive goal structures can enhance academic
achievement.
In contrast to this, other evidence has suggested that competition
leads to negative student outcomes, as compared to an individualistic
focus. In the same typewriting course, Lam et al. (2001) noted that
students placed in a competitive environment were "more likely to
sacrifice learning opportunities for better performance" (p. 1).
They point out that in competition "students seek positive
judgement of competence by outperforming others. To achieve this end,
they may avoid challenge when they are not sure of winning" (p. 18).
There have been notable differences in attribution of success under
competitive versus individualistic goal structures. Lewis and Cooney
(1986, p. 4) commented that "competitive goal structures seem to
foster ability attributions for success and failure. In contrast
individualistic reward structures are more likely to result in effort
attributions."
The problem with fostering ability attributions under competitive
goal structures is that it can have an impact on student self-concept.
Covington and Omelich (1984, p. 1039) cite Feldman and Ruble, Levine,
and Veroff to argue that "competition raises students' doubts
about their ability by directing their attention to social comparison
information." This could potentially have long term negative
effects on the learner, particularly on less able students. Nicholls, cited
in Lewis and Cooney (1986, p. 4), suggests that "social comparison
for low achievers may be predicted to lead to the maintenance of a low
self-concept of ability and, thus, low motivation."
The issue regarding which form of goal structure (feedback) should
be implemented revolves around the fact that different forms of feedback
are appropriate for different students. For instance, Covington and
Omelich (1984, p. 1040) cite researchers Bloom, and Born and Zlutnick,
who conclude that "slow learners will profit more from a
task-oriented structure than will fast learners." This raises the
question as to who should decide which form of feedback students should
receive, and on what basis.
ABOUT THIS STUDY
The research conducted in this study investigated whether allowing
students to choose their preferred form of feedback significantly
affected academic performance or student attitudes towards learning.
The experimental design utilised in this project drew on
Lewis and Cooney's (1986) study, as both a means of comparison
and a point of contrast. In 1986, Lewis and Cooney studied 52 fourth and
fifth grade students who were each randomly assigned to one of three groups:
competitive feedback, individualistic feedback, and no feedback
(control). Based on these groupings, students received differential
performance feedback regarding their accomplishment in two 40-minute
computer-assisted mathematics sessions per week over a 6-week period.
Lewis and Cooney (1986) note that the major finding of their
research was that the feedback conditions were found to differentially
affect male and female performance, despite the fact that there was no
significant effect of feedback method upon attribution of success to
effort or academic locus of control for either males or females
(contrary to the previous research they cite and their own
expectations). In their study, not only did males exhibit a significantly
higher rate of progress than females within the competitive feedback
group, but females within the individualistic feedback group exhibited a
significantly higher rate of progress than females in the competitive
feedback group and control group males (p. 15). This is shown in Figure
1.
[FIGURE 1 OMITTED]
A key difference between Lewis and Cooney's study and the
present experiment is that the 1986 study broke performance and
attitudinal indicators down by gender rather than by feedback
preference. This research project attempted to discover whether
grouping students by their self-professed preferred form of feedback,
rather than assigning all males to the competitive feedback group and
all females to the individualistic feedback group, would reveal a
stronger relationship to the performance and attitudinal measures.
Offering students their preferred form of feedback regardless of
gender was conjectured to be a more successful approach to improving
academic performance, student confidence, and learning outcomes
generally as opposed to expecting that all males would respond better to
competitive feedback and all females would respond better to
individualistic feedback. This tailored approach is particularly
relevant in today's online environment where providing all students
with their preferred form of feedback is entirely possible.
If different feedback conditions do have significant effects upon
academic performance and student attitudes towards learning then
teachers will need to shift their emphasis from providing students with
a fixed form of feedback to guiding students towards the form of
feedback that is best for their personal growth. Educators can also
focus upon teaching students about the widely documented performance and
attitudinal effects of competitive versus individualistic goal
structures (Ames & Ames, Covington, and Nicholls, cited in Covington
and Omelich, 1984, p. 1047). In
this way students can become better managers of their own learning.
In summary, the online medium can be utilised to provide students
with choice over the type of feedback they receive, which may in turn
affect their academic performance and learning attitude. By surveying
students and monitoring them during an online learning module and
repeated practice quiz, this experiment attempted to detect such
effects.
METHOD
Instrument Design and Construction
The "Quadratics-Are-Fun" Web site
(http://n2.mpce.mq.edu.au/~mbower/qaf/) was constructed specifically for
this experiment to provide data on performance and attitudinal effects
of differential assessment feedback conditions.
The task of solving quadratic equations was chosen for the purposes
of this exercise because it required some personal construction rather
than mere recollection of facts and thus represented an activity that
students could practise repeatedly without feeling as though they were
covering exactly the same content. A quadratic equations quiz allows
questions of similar form but different values to be constructed, thus
providing a reasonably consistent difficulty level across different
quiz attempts.
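To illustrate how such questions can be generated (the site's randomised components were written in JavaScript; see the Data Collection Process section), the following sketch constructs a monic quadratic with random integer roots. The names are hypothetical and are not drawn from the actual site code, which is available separately (see note 2).

    // Hypothetical sketch: generate a monic quadratic equation
    // x^2 + bx + c = 0 with random non-zero integer roots, so that each
    // quiz attempt has the same form but different values.
    function randomRoot() {
      let r = 0;
      while (r === 0) r = Math.floor(Math.random() * 19) - 9; // -9..9, non-zero
      return r;
    }

    function generateQuestion() {
      const r1 = randomRoot();
      const r2 = randomRoot();
      // (x - r1)(x - r2) = x^2 - (r1 + r2)x + (r1 * r2)
      const b = -(r1 + r2);
      const c = r1 * r2;
      const term = (n) => (n < 0 ? "- " + (-n) : "+ " + n);
      // A production version would suppress zero coefficients.
      return {
        text: "x^2 " + term(b) + "x " + term(c) + " = 0",
        roots: [r1, r2].sort((p, q) => p - q),
      };
    }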
The module was designed to be as autonomous as possible in an
attempt to reduce differences between the learning experiences of
different classes. Students received preliminary instructions on the
main page of the site that contained all the necessary information for
executing the module (see Figure 2).
[FIGURE 2 OMITTED]
[FIGURE 3 OMITTED]
Before students commenced the experiment their details were
collected online, including information such as their age, gender,
school, grade, and state/province (see Figure 3). Also, students
completed a pre-survey (see Figure 4) to ascertain their disposition
towards online learning and mathematics.
[FIGURE 4 OMITTED]
The pre-survey consisted of 10 questions:
1. Do you have Internet access at home? (Yes/No response)
2. How many hours per week (on average) do you use the Internet?
(text-field response box requiring a positive number)
3. How much do you enjoy learning from the Internet? (eleven-point
Likert scale)
4. How would you rate your mathematical ability? (eleven-point
Likert scale)
5. How would you rate the effort you make in mathematics?
(eleven-point Likert scale)
6. How much do you think ability contributes to success in
mathematics? (eleven-point Likert scale)
7. How much do you think effort contributes to success in
mathematics? (eleven-point Likert scale)
8. Have you ever factorised quadratic expressions before? Quadratic
expressions are ones like x² - 6x + 8.
9. Have you ever solved quadratic equations before? Quadratic
equations are ones like x² + 2x - 15 = 0.
10. Please consider the following statements carefully and then
select one of the two options.
For my performance in mathematics skills tasks ...
I prefer to receive feedback about how I compare to other students
OR
I prefer to receive feedback about how I compare to my own past
performances.
The final question was used to allocate students to experimental
groups.
After submitting their details and their pre-survey responses,
students commenced the learning module, which consisted of a 10-page
instructional sequence outlining a procedure for solving monic quadratic
equations. Randomised interactive online guided practice was provided on
several of the pages. (Figure 5 shows an example of the type of
interface employed.)
Upon completion of the learning module students were prompted to
attempt the practice quiz. All students received an identical interface
to the online quiz, so as not to bias performance between feedback
groupings (see Figure 6).
[FIGURE 5 OMITTED]
[FIGURE 6 OMITTED]
The site was designed to randomly allocate students to one of
the competitive, individualistic, or neutral feedback groups in a
pinwheel fashion based on their feedback preference and order of account
creation. For instance, three consecutive students who selected a
"competitive" feedback preference would be allocated to the
competitive, individualistic, and neutral feedback groups respectively.
This approach ensured the most even distribution of feedback preference
types to feedback allocation groups, which in turn provided the best
basis for statistical analysis in later phases of the experiment.
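In code, this pinwheel allocation amounts to a round-robin counter maintained separately for each feedback preference. The following is a minimal sketch only, with hypothetical names; the actual site stored account data in MySQL and processed submissions in PHP (see Data Collection Process).

    // Hypothetical sketch of the pinwheel allocation: for each preference,
    // successive registrants cycle through the three feedback groups.
    const GROUPS = ["competitive", "individualistic", "neutral"];
    const counters = { competitive: 0, individualistic: 0 };

    function allocateFeedbackGroup(preference) {
      // preference is "competitive" or "individualistic" (pre-survey question 10)
      const group = GROUPS[counters[preference] % GROUPS.length];
      counters[preference] += 1;
      return group;
    }

    // Three consecutive students with a competitive preference:
    allocateFeedbackGroup("competitive"); // "competitive"
    allocateFeedbackGroup("competitive"); // "individualistic"
    allocateFeedbackGroup("competitive"); // "neutral"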
* Students who were allocated to the competitive feedback group
received their test score and completion time as well as: i) their
performance ranking compared to their peers, ii) the best performance in
the group, and iii) the average performance of the group (see Figure 7).
Performance comparisons in the competitive feedback group only related
to other students within the competitive feedback group, not the entire
student cohort. A rank order comparison was chosen in order to stimulate
social comparison within the competitive feedback group without the
necessity of direct comparison to the performance of individual
subjects. The ranking was based on a descending order sort by score
followed by an ascending order sort by time (expressed as a comparator
in the sketch below).
* Students in the individualistic feedback group received feedback
information in a similar format to that of the competitive feedback
group, except that their test score ranking, average score, and best
score were presented in relation to their own past performances rather
than to the performance of their peers (see Figure 8). This also
corresponds to the experimental approach adopted in Lewis and
Cooney's experiment (1986, p. 9).
* Finally, the neutral (control) group was not exposed to any
comparative feedback, although students still received their test score
and the time taken to complete the quiz.
[FIGURE 7 OMITTED]
[FIGURE 8 OMITTED]
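The rank-order comparison used in the competitive group reduces to a single comparator: sort descending by score, breaking ties by ascending time. The following is a minimal sketch only, with hypothetical field names; the actual implementation is contained in the downloadable site archive (see note 2).

    // Hypothetical sketch of the competitive-group ranking: higher scores
    // rank first; equal scores are separated by faster completion times.
    function rankAttempts(attempts) {
      return [...attempts].sort(
        (a, b) => b.score - a.score || a.timeSeconds - b.timeSeconds
      );
    }

    rankAttempts([
      { student: "A", score: 7, timeSeconds: 240 },
      { student: "B", score: 7, timeSeconds: 180 },
      { student: "C", score: 9, timeSeconds: 300 },
    ]);
    // Resulting order: C (9 marks), then B (7 marks in 180s), then A (7 marks in 240s)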
One week after students first registered, they were required to
return to the "final quiz" section of the site
(http://n2.mpce.mq.edu.au/~mbower/qaf/final/). This section was used to
collect information about how much each student improved over
approximately one week and to direct the student to the
post-survey.
The final quiz page was constructed with an identical interface
(apart from the heading) to the practice quiz so as not to confuse or
distract students. Upon submitting their final quiz attempt students
were given the same form of performance feedback that they had received
during the practice quizzes (refer to Figure 9).
[FIGURE 9 OMITTED]
After they had reviewed their performance in the final quiz,
students were directed to a post-survey consisting of the following 11
questions:
1. How much do you now enjoy learning from the Internet?
(eleven-point Likert scale)
2. How do you rate your mathematical ability? (eleven-point Likert
scale)
3. How do you rate the effort that you make in mathematics?
(eleven-point Likert scale)
4. How much do you think ability contributes to success in
mathematics? (eleven-point Likert scale)
5. How much do you think effort contributes to success in
mathematics? (eleven-point Likert scale)
6. How much did you enjoy studying this unit compared to the usual
way that you learn mathematics? (eleven-point Likert scale)
7. How effective do you think this unit was compared to the usual
way that you learn mathematics? (eleven-point Likert scale)
8. In the last survey, you were asked to choose whether you prefer
to receive feedback about how you compare to other students OR how you
compare to your own past performances. Which method did you choose and
why? (open ended)
9. What were the best things about the online quadratics module?
(open ended)
10. What were the worst things about the online quadratics module?
(open ended)
11. Any other comments (open ended).
Items one through to five replicated questions in the pre-survey,
thus allowing the effect of the module and quiz upon these indicators to
be measured.
Submitting the post-survey was the final task required of the
participant in this experiment.
DATA COLLECTION PROCESS
The data collection phase of this experiment took place from 11
August to 26 September 2003.
The module was constructed in HTML, with all randomised components
constructed using JavaScript. A MySQL database was used to store and
manage all user account, survey, and quiz response data; the PHP
scripting language was used to process and distribute all information
submitted to the site. (2)
For each student the experiment consisted of three temporal phases:
1. An initial lesson during regular class time where students
submitted their details and pre-survey responses, attempted the learning
module, and then progressed to the practice quiz.
2. A period of approximately 1 week where students could practise
the quiz as often as they liked from home or at school.
3. A final lesson during regular class time where students
attempted the final quiz and completed the post-survey.
A support document providing instructions for conducting the
experiment was issued to all participant schools in an attempt to
standardise the data collection process. This document also provided
advice for implementation and emphasised the importance of encouraging
students to practise at home (3) or at school.
For the classroom lessons students accessed the module via the
school computer laboratories or their laptops. The role of the teacher
was limited to classroom management and, where necessary, responses to
student questions. Also, the experimental design permitted students to
help one another through the learning and guided practice phase of the
module. This was not deemed to significantly bias results since all
three experimental groups were present within each class.
RESULTS
A total of 806 students (F=361, M=445) registered on the
"Quadratics-Are-Fun" site. Students ranged from grades 8 to 11 (with a
mean age of 13.7 years) and came from nine different schools (4), which
included a mix of coeducational and single-sex, public and private schools.
The data were trimmed to only include participants who:
* Completed all (non-descriptive) pre- and post-survey questions
* Responded that they were in grades 8 to 11
* Had at least one attempt at the practice quiz
* Completed the final quiz
* Had no practice quiz or final quiz attempts that took less than
13 seconds (such attempts were deemed non-serious)
* Attempted the final quiz more than 24 hours after their first
attempt at the practice quiz.
This resulted in a trimmed sample of 191 (F=87, M=104) participants
who in total made 1609 attempts at the quiz.
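Expressed as a filter, the trimming criteria listed above might look like the following sketch. The record fields are hypothetical; the actual trimming procedure is not published in this report.

    // Hypothetical sketch of the trimming criteria applied to participant
    // records, assuming each record carries survey completeness flags,
    // a grade, and chronologically ordered, timestamped quiz attempts.
    const DAY_MS = 24 * 60 * 60 * 1000;

    function meetsTrimmingCriteria(p) {
      const practice = p.attempts.filter((a) => a.kind === "practice");
      const finals = p.attempts.filter((a) => a.kind === "final");
      return (
        p.completedAllSurveyQuestions &&                 // all non-descriptive items
        p.grade >= 8 && p.grade <= 11 &&
        practice.length >= 1 &&
        finals.length >= 1 &&
        p.attempts.every((a) => a.timeSeconds >= 13) &&  // exclude non-serious attempts
        finals[0].timestamp - practice[0].timestamp > DAY_MS // final > 24h after first practice
      );
    }

    // Given an array of participant records:
    // const trimmed = allParticipants.filter(meetsTrimmingCriteria);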
Of these 191 students, 66% responded that they preferred to receive
feedback that compared them to other people (competitive preference) and
34% selected a preference for feedback that compared them to their own
past performances (individualistic preference). These proportions were
closely preserved within gender groups (see Table 1).
The numbers in each feedback preference/allocation cell for the
trimmed dataset, after students had been randomly placed in their
feedback allocation groups, are provided in Table 2.
The experiment produced significant results across three key
measures:
1. Test score
2. Mathematics ability self-rating, and
3. Number of practice attempts.
Note that for all statistics that follow, all scores are out of 10
and all t-tests are two-sided.
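The report does not state which statistical software produced the figures that follow. As a purely illustrative sketch of what a paired comparison involves, the following computes the paired t statistic and degrees of freedom for matched pre/post scores; the two-sided p-value is then obtained by comparing |t| against the t distribution with df degrees of freedom.

    // Illustrative sketch only: paired t statistic for matched pre/post
    // scores (e.g., initial versus final quiz scores for the same students).
    function pairedT(pre, post) {
      if (pre.length !== post.length) throw new Error("unequal sample sizes");
      const n = pre.length;
      const diffs = post.map((x, i) => x - pre[i]);
      const mean = diffs.reduce((a, b) => a + b, 0) / n;
      const variance = diffs.reduce((a, d) => a + (d - mean) ** 2, 0) / (n - 1);
      const se = Math.sqrt(variance / n);
      return { t: mean / se, df: n - 1 };
    }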
Key Finding 1: The facility to practise led to a significant
improvement in test score.
The mean quiz score for the entire dataset rose significantly as a
result of the facility to practise, from an initial score of 3.75
out of 10 to a final quiz score of 5.81 out of 10 (Z = 7.631,
p < 0.0001) (5). See Figure 10 for a graph representing the mean
improvement by each feedback allocation group.
All combinations of feedback preference and feedback allocation
showed a significant increase in mean quiz score except for students who
initially indicated a preference for competitive feedback and were
allocated to the competitive feedback group. When the latter group was
tested for a significant difference between initial test score and final
quiz score, the resulting parameters were t = 1.196, p = 0.238 (df =
44).
Also, there was a significant difference in test score
improvement between the competitive preference and individualistic
preference students who were placed in the competitive
feedback group (t = 2.198, df = 65, p = 0.032). Students placed in the
competitive feedback group who indicated that they preferred
individualistic feedback improved by a mean of 3.09 marks compared to a
mean improvement of only 0.76 marks by competitive preference students.
This result is contrary to the expectations of this experiment; it
had been conjectured that receiving the preferred form of feedback would
lead to significantly greater gains in performance than receiving the
non-preferred form of feedback.
A possible explanation for this could lie in the motivation of
competitive preference students to be the best. In this experiment the
student who scored the best performance on the practice quiz (6)
achieved the result quite early in the data collection process. It may
have been possible that the presence of this "unbeatable
winner" discouraged the competitive preference students in the
competitive group more than the individualistic preference students.
This effect would need to be substantiated with further research.
Key Finding 2: Providing students with their non-preferred form of
feedback had a negative impact on their mathematics ability self-rating.
Receiving the non-preferred form of feedback in this experiment led
to a significant decrease in students' mean mathematics ability self-rating (t
= -2.327, df = 65, p = 0.023) from the pre-survey to the post-survey.
This can be seen in Figure 11 by noting that individualistic preference
students allocated to the competitive feedback group and competitive
preference students allocated to the individualistic feedback group both
had decreases in their mathematics ability self-rating score out of ten.
There are three other effects contained within this change in
mathematical ability self-rating measure.
Firstly, being allocated to the competitive feedback group led to
a significant decrease in mathematical ability self-rating of 0.8 of a
mark over the course of the experiment (t = -2.716, df = 66, p = 0.008).
This does not speak well for providing students with competitive tasks
online.
Secondly, for students who had indicated an individualistic
preference, those allocated to the competitive feedback group had a mean
change in mathematical ability self-rating as a result of the module
that was significantly lower than that of those allocated to either the
individualistic feedback group (t = 2.309, df = 41, p = 0.026) or the
neutral feedback group (t = 2.109, df = 44, p = 0.041).
That is, for those whose preference was for individualistic
feedback but who were placed in the competitive feedback group, the quiz had
a more negative impact on their impressions of their own ability than
for those allocated to the individualistic or neutral feedback groups.
If this result were extrapolated to other content areas and tasks,
placing students with an individualistic feedback preference in a
competitive feedback environment may have a significantly detrimental
effect on their self-perceptions of ability in comparison to placing
them in an individualistic feedback or neutral feedback environment.
Thirdly, the module had a significantly negative impact on the mean
mathematics ability self-rating score of students who indicated a
competitive preference (t = -2.695, df = 125, p = 0.008), irrespective
of the feedback group to which they were allocated. This is depicted in
Figure 11 by all three competitive preference columns having negative
values.
Based on the scores achieved by students, students' open-ended
comments, and teacher observations, many participants found the content
of the module difficult. Presenting competitive (success- or
ego-driven) students with a difficult task that they do not master may
have a more negative impact on their ability self-concept than on that
of individualistic preference students. Also, the mean improvement
in quiz score of students who indicated a competitive preference was
less than that of individualistic preference students for all feedback groups;
this may have impacted their mathematical ability self-rating.
Note that the module did not have a negative impact across any
feedback preference or feedback allocation groupings for any other of
the attitudinal variables (effort in mathematics self-rating, ability
for success rating, effort for success rating, enjoyment of Internet
learning rating).
Key Finding 3: Possible gender differences in the number of
practice attempts (7)
Of the top 20 most practising students, 19 were male; this is
significantly different from the population proportion in the trimmed
dataset (χ² = 12.334, df = 1, p < 0.001) (8). The
proportion of competitive preference to individualistic preference was
roughly preserved within this "top 20" group (competitive
preference = 15, individualistic preference = 5). Also, these "top
20" students were fairly evenly distributed across feedback
allocation groups (competitive feedback = 8, individualistic feedback =
7, neutral feedback = 5). That is to say, it does not appear that the
feedback preference or allocated feedback group led to students
practising more often.
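The χ² figure above is a goodness-of-fit comparison of the observed gender split in the top 20 against the gender proportions of the trimmed dataset. As an illustration only (the exact expected proportions and any continuity correction used in the report are not stated), a one-degree-of-freedom goodness-of-fit statistic can be computed as follows:

    // Illustrative chi-square goodness-of-fit sketch: observed counts in
    // k categories against expected proportions. The exact inputs used in
    // the report are not stated, so this will not necessarily reproduce
    // the reported value.
    function chiSquare(observed, expectedProportions) {
      const total = observed.reduce((a, b) => a + b, 0);
      return observed.reduce((sum, obs, i) => {
        const exp = total * expectedProportions[i];
        return sum + ((obs - exp) ** 2) / exp;
      }, 0);
    }

    // e.g., 19 males and 1 female in the top 20, against the trimmed
    // dataset's gender proportions (104 male, 87 female of 191):
    chiSquare([19, 1], [104 / 191, 87 / 191]);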
The numbers of practice attempts by the top 20 most
practising students were:
81, 63, 41, 34, 34, 30, 30, 28, 27, 24, 23, 21, 21, 21, 19 (F), 19,
18, 18, 18, 18.
It would appear that this fixated approach towards the practice
quiz extended beyond any effect that may have been caused by the
different school attribute; (5) however, this would need to be confirmed
by further sampling.
If this fixated approach to practising the quiz does tend to exist
predominantly in male students, then it has educational ramifications;
it may allow educators to structure tasks in a way that motivates
students to practise.
The mean number of attempts on the practice quiz for males was more
than double that of females (μ_m = 7.94 versus μ_f = 3.67); this is a
significant difference (t = 3.755, df = 189, p = 0.0002). The extreme
number of attempts by the upper decile of most practising students (all
but one of whom were male) was a major contributor to this difference.
If these top 20 most practising students were excluded from the dataset,
the p-value for the difference of the means became 0.035, a result more
easily discounted given that so many participants were drawn from
single-gender schools (9).
Female students had significantly higher pre-survey ratings for
both attribution of success in mathematics to ability (t = 2.086, df =
189, p = 0.038) and attribution of success to effort (t = 2.280, df =
189, p = 0.024) than male students. As with the other gender results,
this observation would need to be substantiated by sampling from within
co-educational schools to ensure that the school from which participants
were drawn did not interfere with measures on these variables.
Other Observations
There were no significant differences between the competitive
preference and individualistic preference groups in the pre-survey
responses or initial quiz performance.
Also, there were no significant differences between females and
males in the pre-survey responses or initial quiz performance, apart
from the higher female attribution of success to ability and effort
means outlined in the Key Finding 3 section. Nor did the module manifest
any gender differences in the change in attitudinal variables (10) or
quiz improvement variables. One way that this can be interpreted is that
the extra practice that male students performed did not lead to any
significant gains in test score.
FURTHER DISCUSSION
When regression analysis was performed on the number of quiz
attempts versus the improvement in test score from the first attempt to
the final quiz, a highly significant relationship was detected (t =
2.745, df = 190, p = 0.006628). However, a regression line with β =
0.07 raises questions as to whether the extra effort of practising was
worth the trouble (11). When the dataset was trimmed to only include
those who practised five or fewer times, a value of β = 0.52 was
obtained, which makes practising seem a much more worthwhile pursuit in
terms of improving mathematics performance on this quadratic equations
task. This draws attention to the fact that the ceiling for improvement
per practice attempt diminishes as the number of attempts increases.
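The reported β can be read as the slope of an ordinary least-squares regression of score improvement on number of practice attempts, so β = 0.07 corresponds to an average gain of 0.07 marks per attempt (see note 11). A minimal sketch of that slope computation, for illustration only:

    // Illustrative ordinary least-squares slope: improvement regressed on
    // the number of practice attempts; a slope of 0.07 would mean 0.07
    // marks of improvement per additional attempt.
    function olsSlope(x, y) {
      const n = x.length;
      const meanX = x.reduce((a, b) => a + b, 0) / n;
      const meanY = y.reduce((a, b) => a + b, 0) / n;
      let sxy = 0;
      let sxx = 0;
      for (let i = 0; i < n; i++) {
        sxy += (x[i] - meanX) * (y[i] - meanY);
        sxx += (x[i] - meanX) ** 2;
      }
      return sxy / sxx;
    }

    // e.g., olsSlope(attemptCounts, scoreImprovements)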
Time taken did not significantly change between the first quiz
attempt and the final quiz attempt. It is possible that some students
may have spent more time on the quiz as their ability to solve
quadratic equations improved and they could answer more questions.
However, the time taken on a quiz was considered highly dependent on the
student's environment and for this reason was judged as an
unreliable measure in this experiment.
If male students do have more of an inclination towards fixated
practice on skills tasks similar to the one presented in this experiment
then further research needs to be performed to ascertain the reasons.
This tendency could be related to the gender differences in preference
for computer games. It may be possible that boys prefer tasks that
involve power (12) in some way.
If the latter is true then it is interesting to observe that it was
the temporal dimension of the performance measure, not necessarily the
attraction of outperforming other people, that provided the power
dimension to the activity. There is an implicit assumption among some
educators that boys are more disposed to "beating" other
students, but perhaps they are searching for a form of empowerment, not
necessarily at the expense of others. It may be the case that boys are
just as happy to compete against themselves (e.g., using time as a
measure). In the regular classroom it is difficult to manage timed
feedback; in addition, such activities always draw attention to a
"winner." Computers offer a private and easily implemented way
to provide power-based (timed) performance measures that could be used
to facilitate improved learning outcomes for some boys (or to be less
sexist, students who have a preference for such feedback).
This fixated approach towards practice may not be the best use of
these students' time in terms of improving their mathematical
skills. However, educators should acknowledge that such students must
feel as if they are benefiting in some way from the task. For instance,
an adolescent who is finding it difficult to obtain positive identity in
other arenas may benefit greatly from a task where he/she can
continually improve and master a skill, all the while receiving what the
student regards to be a form of positive feedback, monitoring, and
attention. This could relate to the reasons that boys have a greater
tendency to play computer games for hours on end. Perhaps there is an
innate motivation for young males to participate in activities that
allow them to feel like they are developing their speed, strength, and
power, which could be leveraged to provide approaches to learning that
boys will find more engaging. Further research into this area would be
required to substantiate any of these conjectures.
A possible limitation of this experiment is the weak link between
the criteria used to classify students as either competitive or
individualistic feedback preference and their actual preference.
Students may not have fully understood the question or considered their
response deeply enough. Care was taken to highlight the importance of
this question in the pre-survey by presenting it in a different colour
(red) than all other questions and by asking students to "please
consider the following question carefully," but more extensive
questioning or greater explanation may have led to more accurate
classification. There is also the possibility that students answered
this question according to their experiences with face-to-face learning
and that their preferences for online feedback may be different.
Another possible limitation of this study was the duration of the
differential feedback conditions and the extent to which each student
was subject to these conditions. A more extensive exposure to the
feedback conditions may have detected significant differences between
receiving the preferred and non-preferred form of feedback that this
experiment did not.
The results uncovered by this project should not be taken out of
the context of the task. Solving quadratic equations is a skill-based
task that is suited to repeated, timed practice. Effects of receiving
preferred or non-preferred, competitive or individualistic feedback for
higher order reflective tasks could obviously be entirely different.
Apart from the areas already mentioned in this report, further
research into the effect of allowing students choice over other forms of
content could uncover valuable results from both a sociological and
educational point of view. For instance, the level of technical
language, speed of presentation, diagrammatic emphasis, motivational
support, and level of higher intensity feedback (such as graphical
feedback or pop-up congratulatory windows with sound) are all
potentially adaptable to student preferences. Additional research into
these areas could identify generic approaches to online content
provision that lead to significant improvements in educational outcomes.
CONCLUSION
The online medium allows educators to tailor feedback systems to
the preferences of learners. In the case of this quadratic equations
learning module, providing students with their non-preferred form of
feedback system had a significantly detrimental impact upon their
mathematical ability self-rating. Teachers need to be aware that
providing students with their non-preferred form of feedback can have
this negative impact, and that taking advantage of the online medium to
provide students with their preferred form of feedback can improve
educational outcomes.
Using the online medium to provide students with a repeated
practice facility led to a significant improvement in quiz scores of
over two marks out of ten. Once again, it is important that educators
are aware that gains in academic performance can be achieved simply by
offering students this type of service.
However, teachers need to take responsibility for educating students
about the possible implications of different types of feedback
structures and types of feedback preferences. In this experiment the
students who indicated a preference for competition ended up having a
significantly lower mathematics ability self-rating as a result of the
module whereas the individualistic preference students did not. Being
allocated to the competitive feedback group led to a significantly lower
mathematics ability self-rating independent of feedback preference.
Students with a competitive feedback preference who were placed in the
competitive feedback group demonstrated no significant improvement in
test score. This sort of information may lead students to reflect upon
their preferences and question their efficacy.
It is possible that providing some students (particularly some
boys) with a power-based task involving speed and accuracy may motivate
them to practise. However, it is important that teachers consider this
information in the broader context of the student's welfare,
helping their pupils become aware that beyond a certain point the time
spent practising a task may not produce as much improvement in a subject
as moving on to the next activity.
The automated and differentiated services that online education can
provide will change the role of the teacher in the future. No longer
will teachers be sole providers of content and feedback. With further
research into the effects of different Web-based educational systems,
teachers can make informed decisions about the best approaches to
utilise with their students and more confidently engage in the task of
helping students understand the implications of these different systems
upon their learning.
Table 1 Feedback Preferences of Participants

                         Gender
Feedback Preference    Female    Male    Total
Competitive                58      68      126
Individualistic            26      39       65
Total                      84     107      191
Table 2 Participants in Each Feedback Preference/Allocation Cell

                                 Allocated Feedback Group
Feedback Preference    Competitive    Individualistic    Neutral    Total
Competitive                     45                 44         37      126
Individualistic                 22                 19         24       65
Total                           67                 63         61      191
                            Average Improvement
Allocated Feedback Group    Competitive Preference    Individualistic Preference
Competitive                       0.76                          3.09
Individualistic                   2.25                          2.68
Neutral                           2.27                          2.50

Figure 10. Average Improvement From First Quiz Attempt to Final Quiz
Note: Table made from bar graph.
                            Change in Self-Rating
Allocated Feedback Group    Competitive Preference    Individualistic Preference
Competitive                      -0.80                         -0.88
Individualistic                  -0.41                          0.47
Neutral                          -0.35                          0.38

Figure 11. Change in Mathematics Ability Self-Rating Score /10
Note: Table made from bar graph.
Acknowledgments
This project was conducted in conjunction with the University of
Southern Queensland and the Macquarie ICT Innovations Centre.
The success of this project has been the result of widespread
assistance and support. Thanks to the following people for the
invaluable time and effort that they have contributed.
Assoc. Professor Peter Albion, University of Southern Queensland
Professor Mike Johnson, Director (Macquarie University), Macquarie
ICT Innovations Centre
Jennifer Fergusson, Director (DET), Macquarie ICT Innovations
Centre
Peter Gould, Chief Education Officer--Mathematics, NSW Dept of
Education & Training
Coordinating Teachers for the nine participant schools:
Dr Joan Lawson, Normanhurst Boys' High School
Sarah Hamper, Tara Anglican School
Maureen Breen, MLC School
Marie Lebens, Turramurra High School
Michael Fuller, Killara High School
Bruno Pileggi, Mazenod College
John Tonkin, Marsden College
Ted McGilvray, Ryde Secondary College
Andrew Lloyd, Centralian College
Thanks to the Macquarie ICT Innovations Centre for organising a
Web/MySQL/PHP server upon which to host the site.
Also, many thanks to all other teachers who gave up their precious
classroom time to assist with this project. Their support has made this
research possible.
Notes
(1) Anyone may log on and work through the site. However, if you are
an educator who is creating a new user account please add the prefix
'test' to your username, for instance, testMatt. That way your
quiz results won't affect the feedback that students receive.
Alternately, the accounts 'testuserC', 'testuserI',
and 'testuserN' have been set up to show you the different
types of feedback received by the Competitive, Individualistic, and
Neutral (control) groups. The password is the same as the username for
these three accounts.
(2) The complete zip file of the "Quadratics-are-fun" site
can be downloaded from http://n2.mpce.mq.edu.au/~mbower/qaf/qaf.zip and
is free for educational use. Please note the instructions and
disclaimers in the readme.txt file in the root directory of the site.
(3) Of the 191 students in the final trimmed dataset, 184 responded
that they had Internet access at home.
(4) These schools were Centralian College, Killara High School,
Marsden High School, Mazenod College, MLC School, Normanhurst Boys High
School, Ryde Secondary College, Tara Anglican School, and Turramurra
High School.
(5) Note that the time taken to complete the quiz decreased from an
average of 276 seconds to 270 seconds, a non-significant result (Z =
-0.231, p = 0.817).
(6) The best performance on the quiz was a score of 10 out of 10 in
13 seconds. The student who achieved this result took 81 attempts at the
quiz overall.
(7) When analysing the differential gender effect of this
experiment the low level of coeducational school students in the trimmed
dataset needs to be considered. Even though 260 coeducational school
students registered on the site, only 19 met the trimming criteria. This
tempers the extent to which conclusions can be drawn regarding gender
differences due to possible interference by the school attribute of each
observation.
(8) This result needs to be considered in light of the fact that
most of the subjects in the trimmed dataset were from single-gender
schools.
(9) Another important consideration is that not every teacher would
have placed the same amount of emphasis on the importance of the quiz or
provided the same amount of time in class or between first and final
quiz lessons, which may act to confound the number of attempts variable
between genders. On this basis further research needs to be performed to
substantiate the gender observations made in this experiment.
(10) The attitudinal variables were: ability in mathematics
self-rating, effort in mathematics self-rating, ability for success
rating, effort for success rating, enjoyment of internet learning
rating.
(11) This result implies that the average improvement in test score
was 0.07 for each practice attempt made on the quiz.
(12) "Power" refers to tasks that involve a temporal
dimension, such as speed, not to tasks that involve beating other
students.
References
Becker, W., & Rosen, S. (1992). The learning effect of
assessment and evaluation in high school. Economics of Education Review,
11(2), 107-118.
Cassady, J., Budenz-Anders, J., Pavlechko, G., & Mock, W. (2001,
April). The effects of internet-based formative and summative assessment
on test anxiety, perceptions of threat, and achievement. Paper presented
at the annual meeting of the American Educational Research Association,
Seattle, WA.
Covington, M., & Omelich, C. (1984). Task-oriented versus
competitive learning structures: Motivational and performance
consequences. Journal of Educational Psychology, 76(6), 1038-1050.
Lam, S., Yim, P., Law, J., & Rebecca, W. (2001, August). The
effects of classroom competition on achievement motivation. Paper
presented at the annual conference of the American Psychological
Association, San Francisco, CA.
Lewis, M., & Cooney, J. (1986, April). Attributional and
performance effects of competitive and individualistic feedback in
computer assisted mathematics instruction. Paper presented at the annual
meeting of the American Educational Research Association, San Francisco, CA.
Sonak, B., Suen, H., Zappe, S., & Hunter, M. (2002, April). The
effects of a web-based academic record and feedback system on student
achievement at the junior high school level. Paper presented at the
annual meeting of the American Educational Research Association, New
Orleans, LA.
MATT BOWER
Macquarie University
Australia
beetlematt@yahoo.com.au