An empirical study of gender issues in assessments using peer and self evaluations.
Ammons, Janice L. ; Brooks, Charles M.
INTRODUCTION
Traditionally, instructors unilaterally assess students'
performance. However, increasing use of teaching and learning strategies
in which students learn with and from each other may lead to increasing
reliance on peer assessments. To this point, the authors of one study
believe that peer assessments may replace most of the grading
traditionally performed by instructors (Henderson, Rada, and Chen,
1997).
When students conduct peer assessments in collaborative learning
environments, they have an opportunity to discuss and analyze each
other's performance. Oftentimes instructors cannot observe
first-hand the contributions of each group member to collaborative
project work, but peer and self assessments can provide a means by which
group marks are allocated among the members of a group based on their
relative contributions. However, moving students into the realm of
grading raises questions about the validity of those marks and whether
the gender of the raters and ratees affects the marks given and received
(Ghorpade and Lackritz, 2001; Falchikov and Magin, 1997; Sherrard,
Raafat, and Weaver, 1994).
Where actual differences in performance exist between male and
female students, evaluations may validly capture those differences
because those differences affect the nature of the contribution that a group member
may make to the collaborative learning experience. Within medical
education, several studies found that women were more skilled at
eliciting concerns from patients and were more empathetic in
consultations (Bean and Kidder, 1982; Marteau et al., 1991; Wasserman et
al., 1984; Weisman and Teitelbaum, 1985).
Other literature suggests that women tend to be more open to other
perspectives and incorporate the perspectives of others with their own,
whereas men tend to focus more on their own perspective (Baxter Magolda,
1992; Belenky et al., 1986). These gender characteristics might suggest
that cooperative learning projects could be more appealing to female
students as compared to male students. If a preference for this type of
learning leads female students to have more enthusiasm about the
collaborative activities, female students may contribute more to the
group effort on average as compared to the male students.
Other variables may also affect the ratings given to male and
female peers. Research pertaining to gender communication patterns in
higher education suggests that males may receive more attention in
classes than females by dominating classroom discussions (Simonds and
Cooper, 2001; Brazelton, 1998; Kramarae and Treichler, 1990). This could
give rise to a "halo effect" (Cascio, 1998), whereby the
rater's knowledge of the ratee's achievement on one dimension
(classroom participation) influences performance ratings in another
area. This could lead to male students receiving higher peer ratings on
average than female students, regardless of the gender of the rater.
Based on these studies, we raise our first research question: Are there
gender differences in peer assessments of team members?
Some researchers report evidence that teachers devalued the
performance of students who are the same gender as the instructor
relative to the performance of students who are the opposite gender from
the instructor (O'Neill, 1985). Another study found that women in a
class gave student group presentations higher ratings than the men did
(Sherrard, Raafat, and Weaver, 1994). The authors also posit that women
may have higher empathy for their peers than men do and that this could
be a reason for the disparity in the peer assessments.
Where peer assessments affect a significant proportion of the total
marks for a course, we believe it is valuable to conduct analyses to
detect whether gender bias exists in students' peer assessments.
Only one study, to our knowledge, employs a cross-gender/same-gender
analysis of peer assessments by comparing two sets of ratings (one from
same gender and one from opposite gender) on the same students to
determine whether significant differences exist in student peer
assessments (Falchikov and Magin, 1997). Whereas Falchikov and Magin
(1997) used data from groups that performed an assessment of their group
members just once during the term, our study uses data from the third
assessment performed by the groups during a term. Some literature
suggests that the reliability of scoring improves if students assess
each other at multiple stages rather than simply at the end of a project
(Bacon, Stewart, and Silver, 1999).
Falchikov and Magin (1997) examined two cases. One case was a
first-year science and technology course where students were assigned to
projects on the basis of the topics those students selected. The other
case involved data from a first-year graduate medical course on clinical
and behavioral studies where students were placed in tutorial groups
that led to the production and presentation of a group report. Neither
case resulted in evidence of gender bias, but further research seems
warranted since peer assessments may be sensitive to the context in
which they are performed. This leads us to our second research question:
Is there gender bias in peer assessments?
In a study that examined self assessment, Sherrard, Raafat, and
Weaver (1994) found that self-assessment scores for group presentations
were approximately 4.5% higher than the peer assessment scores of those
same presentations. However, the study did not indicate whether or not
gender differences existed in the self-assessment scores. When student
self assessments are included as factors in determining the allocation
of a group's marks to individual students, educators may also want
to implement checks to identify gender differences in self assessments.
Thus, we pose our third research question: Are there gender differences
in self assessments?
When evaluating our three research questions, we looked at both a
global measure of performance and six specific work behaviors. While it
may be difficult for us to know the exact nature of the reasons for
gender differences that may exist in the peer or self assessments
performed by our students, these tests may allow us to identify some of
the attributes that affect the differences, if any, in the overall
performance ratings.
METHODOLOGY
Sample
The data for this study came from the third set of self and peer
assessments completed by students enrolled in a required, introductory,
cross-disciplinary business course. The course was team-taught and used
a business simulation game as the primary pedagogical tool to engage
students in making business decisions for their group's company.
Six professors (three male and three female) taught in the course. The
course consisted of three modules: accounting, marketing, and
management. Thus, each student saw three of the six professors during
the semester. Every student saw at least one male professor and at least
one female professor during the first term of the course.
There were 12 sections of the course, and each section had 5
student groups (with 4 to 7 members in each group), resulting in 60
teams or groups. However, only 59 teams completed the self and peer
assessments in the third round of evaluations. In addition to completing
a self-assessment, each student also completed an assessment for each of
their group members. Three hundred thirty students completed this final
set of evaluations, resulting in 330 self assessments and 1592 peer
assessments (for a total of 1922 evaluations). Of the 330 respondents,
120 were female and 210 were male. Each group contained both male and
female students.
The course was required for all business majors in the first
semester of their freshman year. Transfer students were generally waived
out of the course. Of the students included in the sample, 89.5 percent
were freshmen, 8.7 percent were sophomores, and 1.9 percent were
juniors. When classified by major, the largest group of students was
business undecided (35.3 percent). Of the remainder, 16.8 percent were
management/entrepreneurship majors, 15.0 percent were
marketing/advertising majors, 8.0 percent were computer information
systems majors, 7.4 percent were accounting majors, 6.5 percent were
finance majors, 6.2 percent were international business majors, and the
remaining 5.6 percent were other majors.
Data Collection
Each of the three modules contained at least one group project.
Overall, these projects accounted for 31.25% of the course grade. In the
accounting module, each group created a balanced scorecard strategy map
for its firm in the simulation and analyzed the firm's performance
in an oral presentation to the class. In the marketing module, each
group designed a marketing plan for its simulation firm and presented
that plan to the class. In the management module, each group designed a
strategic plan and presented it to the class.
At the end of each module (three different points during the term),
students completed a peer evaluation packet. The packet consisted of a
cover sheet (Exhibit 1) that offered instructions on how to complete the
packet and explained that the evaluations would be anonymously shared
with their group members. The second page of the peer evaluation packet
was an illustration of a completed feedback grid (Exhibit 2). Subsequent
pages in the packet contained blank feedback grids so that the rater
could complete one for each member of the team including
himself/herself. The instructors of the course designed the assessment
criteria based on conversations with students from the prior year about
desirable or undesirable behaviors associated with team members.
Each student completed his/her evaluation packet outside of
classroom hours. Each student placed his/her evaluation packet in a
sealed envelope, wrote his/her name, the course section, and the name of
the team on the outside of the envelope, and gave that envelope to the
module instructor after the completion of the group project and
presentation. After receiving the packets, instructors and a graduate
assistant verified that the correct total number of points (equal to the
number of team members times 100) was distributed among all team
members and that the rating given to an individual on the cover sheet
matched the rating given to that same student on the comment/feedback
grid. They then re-assembled these evaluations so that each student
received his/her cover sheet, the feedback grids that each of his/her
teammates completed to evaluate him/her, and the grid that he/she
completed to rate himself/herself. The average of those scores for that
individual appeared on the bottom of the cover sheet. This average was
used as a weight to determine the individual's grade on the group
work. If a group earned a 90 on its project and a particular student in
that group received an evaluation from peers and self of 90 points, then
that individual received an 81 as a grade on the project. In some cases,
students received grades in excess of 100 points.
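To make the allocation arithmetic concrete, the following sketch in
Python reproduces the packet check and the grade weighting described
above. The function names are ours, and only the example figures (a
group grade of 90 and an average evaluation of 90 points) come from the
text.

    # Sketch of the packet check and grade weighting described above.
    # Function names are illustrative; the study specifies only the
    # arithmetic.

    def points_balance(ratings, team_size):
        """A rater's packet is valid only if the points awarded across
        all team members (including self) sum to team_size * 100."""
        return sum(ratings) == team_size * 100

    def individual_grade(group_grade, average_rating):
        """The average peer/self rating acts as a weight on the group
        grade, with 100 representing an equal share of the work."""
        return group_grade * average_rating / 100

    # Example from the text: a group grade of 90 and an average
    # evaluation of 90 points yield an individual grade of 81.
    assert individual_grade(90, 90) == 81.0

    # An average rating above 100 can lift an individual's grade above
    # the group's grade, which is how some grades exceeded 100 points.
    assert individual_grade(95, 110) == 104.5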
ANALYSIS
Table 1 offers a matrix with the average ratings received in peer
assessments by gender of the rater. The top left cell shows 100.36 as
the average rating received by female students from female raters (F X
Fr). In the next row, 100.49 is the average rating received by female
students from male raters (F X Mr). The bottom left score of 100.44 is
the average rating given to female students by all peer team members
regardless of gender. The mean rating received by females from other
females was not significantly different from the mean rating they
received from males (t=-0.222, p=.825). The middle column shows that
male students received a mean rating of 98.41 from females and 99.05
from males with a mean score of 98.83 from all peers. Again, the mean
rating received by males from females was not significantly different
from the mean rating males received from males (t=-0.888, p=.359).
The gender difference in the average performance ratings of 1.61
points favoring female students, when comparing XF to XM, was
statistically significant (t=3.577, p=.000). However, examination of the
two diagonals of Table 1 reveals a lack of gender bias in the rating
behavior of the students. The average rating received by students from
raters who were of the opposite gender was 99.44. The average rating
received by students from raters who were of the same gender was 99.38.
The point difference of 0.06 was not statistically significant
(t=-0.140, p=.889).
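The diagonal comparison can be reproduced mechanically. The following
Python sketch, using SciPy, assumes the peer ratings are stored as flat
(rater gender, ratee gender, rating) records, which is our own
hypothetical representation of the data, and runs an independent-samples
t-test on the same-gender and opposite-gender groups; the paper does not
name its exact test variant, so equal variances are assumed here.

    # Sketch of the diagonal (same- vs. opposite-gender) comparison in
    # Table 1. The records below are made-up placeholders; the study
    # had 1592 peer assessments.
    from scipy import stats

    records = [
        ("F", "F", 100.0), ("F", "F", 101.5),
        ("M", "F", 100.5), ("M", "F", 99.0),
        ("F", "M", 98.5),  ("F", "M", 98.0),
        ("M", "M", 99.5),  ("M", "M", 99.0),
    ]

    # Group ratings by whether rater and ratee genders match.
    same_gender = [r for rater, ratee, r in records if rater == ratee]
    opp_gender = [r for rater, ratee, r in records if rater != ratee]

    t_stat, p_value = stats.ttest_ind(same_gender, opp_gender)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")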
The evaluation forms also prompted raters to consider a list of
individual work behaviors, such as promptly attending meetings,
delivering work in complete fashion, meeting deadlines, volunteering for
tasks, pulling fair share, and demonstrating a positive and enthusiastic
attitude. Raters scored each of these criteria on a scale from 1 (never)
to 5 (always), and some provided open-ended feedback on each dimension.
Although scores on these individual performance criteria did not enter
into the grading process, raters may have considered these marks in
determining the overall performance ratings given to their team members.
Table 2 indicates that gender differences existed on each of the
individual evaluation criteria that appeared on the evaluation forms.
Females received higher evaluations than males. When grouped by whether
or not the rater was the same gender as the person being evaluated, the
ratings on these individual evaluation criteria were not significantly
different. Thus, no gender bias was evident.
Since the evaluation forms also offered raters an opportunity to
provide open-ended comments on each of the six individual criteria (that
are listed in the left column of Table 2), we tested for gender
differences in the nature of that feedback (positive, negative, or
mixed) and the frequency of that feedback. If the open-ended remarks by
the rater were "clearly positive," the authors coded the
category as positive. If the remarks were "clearly negative,"
the authors coded the category as "negative." If the remarks
included both positive and negative feedback or included feedback that
was not clearly positive or negative, the authors coded the category as
"mixed." Since the variable in Table 3 is a frequency count
across six categories, the variable can range from zero to six. The
first row of Table 3 shows that females received positive remarks in an
average of 1.796 of the six categories, whereas males received positive
feedback in an average of 1.523 of these six categories. The difference,
favoring female students, is statistically significant (t=2.649,
p=.008). However, differences in the frequency of negative feedback
(t=-0.885, p=.376) or mixed feedback (t=-0.689, p=.491) were not
statistically significant. The bottom row of Table 3 ignores the nature
of the feedback and shows that females received feedback across a
higher number of categories than male students did. This difference was
statistically significant (t=2.335, p=.020).
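Because the Table 3 variable is simply a count of how many of the six
criteria drew a remark of a given kind, the tally is easy to sketch in
code. In the following Python illustration, the criterion labels and
the hand-coded remarks are hypothetical stand-ins for the authors'
coding.

    # Sketch of the Table 3 frequency counts for one evaluation:
    # how many of the six criteria drew feedback of each kind.
    from collections import Counter

    SIX_CRITERIA = [
        "prompt attendance", "complete work", "met deadlines",
        "volunteered", "pulled fair share", "positive attitude",
    ]

    def feedback_frequencies(coded_remarks):
        """coded_remarks maps each of the six criteria to 'positive',
        'negative', 'mixed', or None (no remark). Each returned count,
        including the total, can range from zero to six."""
        counts = Counter(code for code in coded_remarks.values() if code)
        counts["total"] = sum(counts.values())
        return counts

    # Hypothetical evaluation with remarks on two of the six criteria.
    example = dict.fromkeys(SIX_CRITERIA)  # every criterion starts as None
    example["met deadlines"] = "negative"
    example["positive attitude"] = "positive"
    print(feedback_frequencies(example))
    # Counter({'total': 2, 'negative': 1, 'positive': 1})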
When the data was grouped by whether the gender of the rater was the
same as or different from that of the person being evaluated, the results show
that more mixed feedback is received when the rater is of the same
gender. The mean frequency of mixed feedback was 0.120 for the same
gender and 0.078 for the opposite gender (t=2.181 and p=.029). However,
there was no evidence of gender bias in tests examining the frequency of
positive, negative, or total feedback.
When the data was grouped by the gender of the rater, the results
showed that females gave more total feedback (mean of 2.070 for female
raters and 1.838 for male raters; t=2.152, p=.032) and females also gave
more positive feedback than male raters (mean of 1.757 for female raters
and 1.544 for male raters; t=2.103; p=.036). No statistical differences
were found for mixed (t=-0.705, p=.481) or negative feedback (t=1.004,
p=.316) by gender of the rater. When the ratings on individual
performance criteria are grouped by the gender of the rater, the only
statistically significant difference was for "pulled fair share
with regard to overall workload." Females were less generous than
males when numerically evaluating the extent to which their peers were
doing their fair share (mean of 4.73 for female raters and 4.79 for male
raters; t=-2.036, p=.042).
The overall performance ratings given by students when rating
themselves (self assessment) ranged from 90 to 150. If a student wished
to indicate that each person on the team contributed equally to the
performance of the team, then a student would mark a 100 for each team
member. Thus, a 90 indicates that the individual recognized that he/she
contributed less than his/her "fair share" to the team's
performance and a 150 indicates that the individual contributed far
beyond what others did in the group. The mean self assessment score was
103.52. Since this is greater than 100, it indicates that individuals
tended to think that they contributed a bit more than an equal share to
the team. Table 4 presents the means and t-tests of the self assessments
by gender. The mean rating that female students (103.80) gave themselves
was not significantly different from the mean rating that male students
(103.37) gave themselves (t=0.480, p=.632). Similarly, there were no
significant differences between genders for self assessments on the
numerical ratings of any of the six individual criteria that appeared on
the evaluation form.
DISCUSSION AND CONCLUSIONS
Most students tend to rate themselves as doing slightly more than
an equal share of the work. One is reminded of Garrison Keillor's
description of Lake Wobegon, where "all the children are above
average." However, since we required that each student allocate
marks among team members and his- or herself so that the sum of the
allocated marks equaled the team size times 100, students are unable to
rate all peers above average.
Gender differences are apparent in our data for peer assessments.
On average, females scored higher than males, regardless of the gender
of the person performing the evaluation. This suggests that there are
actual differences in performance between male and female students when
working on group projects. This performance difference was captured not
only in the overall performance rating received, but also in the ratings
for the specific performance criteria of promptly attending meetings,
delivering work in complete fashion, meeting deadlines, volunteering for
tasks, pulling fair share, and demonstrating a positive and enthusiastic
attitude.
Since no statistically significant difference was found between the
average numerical rating where the rater was the same gender as the
person being evaluated and the average rating where the rater was of the
opposite gender from the person being evaluated, there was no evidence
of gender bias. This is reassuring in light of the prior research that
reported evidence that teachers devalued the performance of students who
are the same gender as the instructor relative to the performance of
students who are the opposite gender from the instructor (O'Neill,
1985). We did not observe this among the students in this study.
The lack of gender bias in our data could be due in part to the way
the groups and the evaluation process were managed. The composition of
the teams did not change over the semester (except where students may
have dropped out of the course), and the data was drawn from the third
set of evaluations. By this point, students were more familiar with
group expectations, characteristics of team members and their
contributions, as well as the evaluation process itself (since the
evaluation form itself never changed during the semester). Some have
suggested that reliability in scoring increases when evaluations take
place at multiple stages rather than simply at the end of the semester
(Bacon, Stewart, and Silver, 1999). Previous research has found that
free-riding (social loafing) is reduced in multiple-stage evaluations
(Brooks and Ammons, 2003). Prior empirical
studies on peer assessment examined gender only where the evaluation was
the first (and last) of the semester. This study is the first to examine
gender issues in self and peer assessments where the evaluation data is
from a subsequent stage. It would also have been interesting to test
whether any gender bias was evident in the first set of evaluations.
This is also the first empirical study of gender issues in peer
assessments to examine not only an overall average rating received by
peers, but also individual ratings on specific criteria, the frequency
of qualitative feedback, and the nature (positive, negative, or mixed)
of that feedback. While we find that the gender difference in the
overall ratings is consistent with the differences in the ratings for
the six specific performance criteria, we also find that females tended
to both give and receive more open-ended feedback than male students,
and this feedback tended to be positive. The list of performance criteria and the
opportunity to evaluate a group member both numerically and
descriptively on those criteria may have helped students determine a
fair overall allocation of the team's marks. Albert Einstein
suggested, "Not everything that counts can be counted, and not
everything that can be counted counts." By not forcing any
formulaic relationship between these specific criteria and the overall
rating, students were also allowed to consider other relevant factors
that were not listed and to weight the factors in any way they deemed
appropriate.
REFERENCES
Bacon, D. R., K. A. Stewart, and W. S. Silver. (1999). Lessons from
the best and worst student team experiences: How a teacher can make a
difference. Journal of Management Education, 23(5): 467-488.
Baxter Magolda, M. B. (1992). Knowing and Reasoning in College:
Gender-related Patterns in Students' Intellectual Development. San
Francisco: Jossey-Bass.
Bean, G. and L. Kidder. (1982). Helping and achieving: compatible
or competing goals for men and women in medical school? Social Science
and Medicine, 16: 1377-1381.
Belenky, M. F., B. M. Clinchy, N. R. Goldberger and J. M. Tarule.
(1986). Women's Ways of Knowing: The Development of Self, Voice,
and Mind. New York: Basic Books.
Brazelton, J. K. (1998). Implications for women in accounting: Some
preliminary evidence regarding gender communication. Issues in
Accounting Education, 13(3): 514-530.
Brooks, C. M. and J. L. Ammons. (2003). Free riding in group
projects and the effects of timing, frequency, and specificity of
criteria in peer assessments. Journal of Education for Business,
78(5): 268-272.
Cascio, W. F. (1998). Applied Psychology in Human Resource
Management. Upper Saddle River, NJ: Prentice Hall.
Falchikov, N. and D. Magin. (1997). Detecting gender bias in peer
marking of students' group process work. Assessment and Evaluation
in Higher Education, 22(4): 385-397.
Ghorpade, J. and J. R. Lackritz. (2001). Peer evaluation in the
classroom: A check for sex and race/ethnicity effects. Journal of
Education for Business, 76(5): 274-281.
Henderson, T., R. Rada, and C. Chen. (1997). Quality management of
student-student evaluations. Journal of Educational Computing Research,
17(3): 199-213.
Kramarae, C. and P. A. Treichler. (1990). Power relationships in
the classroom. In S. L. Gabriel and I. Smithson (Eds.), Gender in the
Classroom: Power and Pedagogy (pp. 41-59). Urbana: University of
Illinois Press.
Marteau, T., C. Humphrey, G. Matton, J. Kidd, M. Lloyd, and J.
Horder. (1991). Factors influencing the communication skills of
first-year clinical medical students. Medical Education, 25: 127-134.
O'Neill, G. (1985). Self, teacher and faculty assessments of
student teaching performance: A second scenario. The Alberta Journal of
Educational Research, 31(2): 88-98.
Sherrard, W. R., F. Raafat, and R. R. Weaver. (1994). An empirical
study of peer bias in evaluations: Students rating students. Journal of
Education for Business, 70(1): 43.
Simonds, C. J. and P. J. Cooper. (2001). Communication and gender
in the classroom. In L. P. Arliss and D. J. Borisoff (Eds.), Women and
Men Communicating: Challenges and Changes, 2nd ed. (Chap. 13). Waveland
Press, Inc.
Wasserman, R., T. Inui, R. Barriatua, W. Carter, and P. Lippincott.
(1984). Pediatric clinicians' support for parents makes a
difference: an outcome-based analysis of clinician-parent interaction.
Pediatrics, 74(6): 1047-1053.
Weisman, C. and M. Teitelbaum. (1985). Physician gender and the
physician-patient relationship: recent evidence and relevant questions.
Social Science and Medicine, 20(11): 1119-1127.
Janice L. Ammons, Quinnipiac University
Charles M. Brooks, Quinnipiac University
Table 1. Comparison of Peer Assessment Ratings of Overall
Performance by Gender.
F X Fr = 100.36     M X Fr = 98.41      Xopp_gender = 99.44
(n=225)             (n=347)             (n=690)
F X Mr = 100.49     M X Mr = 99.05      Xsame_gender = 99.38 (b)
(n=343)             (n=677)             (n=902)
XF = 100.44         XM = 98.83 (a)
(n=568)             (n=1024)
(a) T-test for mean difference of 1.61 between XF and XM is 3.577
(p=.000).
(b) T-test for mean difference of 0.06 between Xopp_gender and
Xsame_gender is -0.140 (p=.889).
F = female student evaluated; M = male student evaluated; Fr =
female rater; Mr = male rater; XF = average rating given to female
students by any peer rater; XM = average rating given to male
students by any peer rater; Xopp_gender = average evaluation
received from a student by a peer rater of the opposite gender;
Xsame_gender = average evaluation received from a student by a peer
rater of the same gender.
Table 2. Comparison of Individual Evaluation Criteria Grouped by
Gender of the Person Evaluated.
Evaluation Criteria             Gender of    N      Mean   Std.        t       p-value
                                Person              Deviation
                                Evaluated
Prompt in attendance at         Female       564    4.78   0.562       3.411   .001
team meetings                   Male         1006   4.67   0.784
Delivered agreed upon parts     Female       564    4.88   0.420       2.797   .005
of project in a complete        Male         1006   4.80   0.604
fashion
Met deadlines                   Female       563    4.92   0.384       2.501   .012
                                Male         1007   4.86   0.518
Volunteered appropriately       Female       564    4.79   0.539       2.023   .043
during team meetings when       Male         1006   4.73   0.680
tasks need to be
accomplished
Pulled fair share with          Female       563    4.83   0.506       3.430   .001
regard to overall workload      Male         1007   4.72   0.704
Showed enthusiastic and         Female       563    4.83   0.522       2.452   .014
positive attitude about team    Male         1003   4.76   0.646
activities and fellow team
members
Table 3: Frequency and Nature of Open-ended Feedback on Six
Performance Criteria.
                                Gender of    N      Mean    Std.        t       p-value
                                Person               Deviation
                                Evaluated
# categories with               Female       568    1.796   2.014       2.649   .008
positive feedback               Male         1021   1.523   1.880
# categories with               Female       568    0.180   0.617      -0.885   .376
negative feedback               Male         1021   0.210   0.665
# categories with               Female       568    0.111   0.436      -0.689   .491
mixed feedback                  Male         1021   0.097   0.356
Total # categories              Female       568    2.086   2.147       2.335   .020
with feedback                   Male         1021   1.830   2.013
Table 4. Comparison of Self Assessment Ratings by Gender.
Evaluation Criteria             Gender    n     Mean     Std.        t       p-value
                                                Deviation
Overall Evaluation              Female    120   103.80   7.765       0.480   .632
                                Male      210   103.37   7.968
Prompt in attendance            Female    105   4.86     0.352       0.811   .418
at team meetings                Male      191   4.82     0.439
Delivered agreed upon           Female    105   4.94     0.233      -0.375   .708
parts of project in a           Male      191   4.95     0.212
complete fashion
Met deadlines                   Female    105   4.97     0.167       0.885   .377
                                Male      191   4.95     0.246
Volunteered                     Female    105   4.91     0.281       0.256   .798
appropriately during            Male      190   4.91     0.294
team meetings when
tasks need to be
accomplished
Pulled fair share with          Female    105   4.91     0.281      -0.503   .615
regard to overall               Male      189   4.93     0.274
workload
Exhibit 1. Cover sheet for team member evaluation packet.
Name:
Group Name:
Section:
Date:
At three different times during the semester (near the end of each
module), you will evaluate each of the members of your team. Fill
in an evaluation sheet for each of your team members. All responses
should be typed and then printed out.
Your evaluation and the evaluations from other members of your
group will be returned to the person who is being evaluated. In
order for these evaluations to be meaningful, you need to provide
your team members with constructive feedback. Let your team members
know what they are doing well and what they are not doing well.
Also, let them know how they can improve their performance. When
the forms are returned to your team members, they will not see your
name associated with your comments on their performance.
Place your completed Team Member Evaluation Packet in a sealed
envelope with your name, your group name, and your SB 101 section
letter indicated on the outside of the envelope. The envelope
should be turned in on the last day of the module.
The points that you award each team member will be used in
determining that team member's grade on that module's group project.
Team members that do not do their fair share of the work may lose
points on group work, and team members that do more than their fair
share of the work may get extra points added to their group work.
On the overall evaluation, you will be "paying" each of your team
members with points. You will have 100 points for each member
of your team. For example, if you have 6 members on your team, you
have 600 points to allocate. If everyone contributed equally and
did his/her fair share of the work, then each member of the team
should receive 100 points. If someone did more than his/her fair
share of the work, that person should receive more than 100 points.
Likewise if someone did less than his/her fair share of the work,
that person should receive less than 100 points.
After you have completed the individual evaluation forms (including
a page for yourself), complete the Summary Table below. Type in
your name and your team members' names. Indicate how many points
each member of your team should receive. The points in this summary
table should match the "pay" you indicated at the bottom of each
person's individual page.
Add up the points that you have allocated across the columns of the
summary table and put this number in the last column. This number
should equal 500 points if you have 5 team members or 600 points if
you have 6 team members.
Summary Table (Complete this Table)

Group        (Insert    (Insert     (Insert     (Insert     (Insert     (Insert     TOTAL
Members'     your name  group       group       group       group       group       TEAM
Names        here)      member's    member's    member's    member's    member's    POINTS
                        name here)  name here)  name here)  name here)  name here)

Allotment of
Team Points
If you do not feel that your group evaluation average accurately
reflects the work that you completed on your group project, you
should set up a meeting and talk with your team members. After
talking with your team members, if you still do not feel that you
have been evaluated fairly, you and your team should schedule a
meeting with that module's professor.
Exhibit 2. Sample sheet in team member evaluation packet.
Team Member's Name: Sample Team Member
Evaluation Criteria: For each criterion, rate this team member on a
scale of 1 (Never) to 5 (Always) and provide comments and constructive
feedback in the spaces provided.

Prompt in attendance at team meetings. Rating: 5

Delivered agreed-upon parts of project in a complete fashion. Rating: 5

Met deadlines. Rating: 3. Comments: Sample Team Member was late
completing the PowerPoint presentation. He was supposed to complete it
on Wednesday afternoon, but he didn't finish until late Thursday night.

Volunteered appropriately during team meetings when tasks needed to be
accomplished. Rating: 4. Comments: Sample Team Member was always at the
meeting, but he was not always prepared for the meetings and hardly
ever had anything to contribute. Sometimes, he just sat there.

Pulled fair share with regard to overall workload. Rating: 5

Showed enthusiastic and positive attitude about team activities and
fellow team members. Rating: 5

Overall Evaluation: Based on the points available for the team, I would
"pay" this person 85 for his/her share of the team points.

Overall Feedback (this is mandatory): Sample Team Member was really
motivated at first, but at the end of the module, he let the team down
when he was late with the PowerPoint. When he missed his deadline, it
meant that the entire team had to stay up all night rehearsing our
presentation. Once Sample Team Member knew he was having trouble with
his part of the assignment, he should have asked for help.