Quality of student experiences at university: a Rasch measurement model analysis.
Waugh, Russell F.
The Community College Student Experiences Questionnaire from the
United States of America (Friedlander, Pace, & Lehman, 1990) was
revised and rewritten for Australian university students. The Australian
Quality of Student Experiences Scale comprises 60 items relating to student expectations and, in direct correspondence, 60 items relating to
their experiences. The items are based on a model involving academic,
personal and group experiences for eight areas: My Course, The Library,
My Lecturers, Student Acquaintances, The Arts, Writing, The Sciences and
Vocations. The convenience sample was 305 first year students and the
data were analysed with a Rasch measurement model. Fifty-eight items did
not fit the model and were discarded. Most of these items came from the
sub-scales: The Arts, The Sciences and Writing. The final scale of 62
items had excellent psychometric properties. Expectation items are easier
than their corresponding experience items, as conceptualised, and the
conceptual design of the scale is confirmed.
Introduction
In the United States of America, but not in Australia, there is a
great deal of research on the college, community college and university
experiences that have the greatest impact on students. This research has been
reviewed by Pascarella and Terenzini (1991) in their book, How college
affects students. A major finding is as follows:
One of the most inescapable and unequivocal conclusions we can make is that
the impact of college is largely determined by the individual's quality of
effort and level of involvement in both academic and nonacademic
activities. This is not particularly surprising; indeed, the positive
effects of both the quality and extent of involvement have been repeatedly
stressed. (p. 610)
This means that students who deliberately aim to take part in the
varied activities and life of a university or college are more likely to
show academic growth, personal growth, satisfaction with their studies
and their institution, and continue with their studies. Some examples of
the experiences studied include participation in class discussions, use
of the library, interaction with lecturers, interaction with students of
various religions and groups, discussions and attendance at functions or
activities of the arts (theatre, paintings, sculpture, music recitals,
dance, musicals, plays), involvement with writing and literature, using
computers, discussions on current and topical science issues, and
learning vocational tasks and skills. Students can take part in all of
these activities, which can contribute to a rich and fulfilling
experience, if they make the effort.
If this evidence were applied to universities in Australia, there
would be at least two main implications. First, students who show the most
academic and personal growth, who are most satisfied with their
university and who have a long association with studies at their
university, are the ones who put the greatest amounts of effort and time
into their university life and studies. Second, individual universities
should investigate the links between the university environment and the
quality of effort exerted by their students. If they find that students
are not making the effort to take part in the varied activities
available, then they could develop strategies to improve the situation.
An ordinal level scale to measure quality of effort was developed
by Pace (1979a, 1979b, 1984, 1992) and has been used extensively in the
USA. It consists of 61 items over 8 sub-scales involving My Course, The
Library, Faculty Staff, Student Acquaintances, The Arts, Writing, The
Sciences and Vocations. Its psychometric properties have been studied by
Lehman (1991, 1992) and Ethington and Polizzi (1996), using traditional
measurement techniques with USA data. The latter stated:
The Quality of Effort measures can be used to make valid and reliable
inferences regarding students' efforts and involvement, and the validity of
the inferences is not conditional on whether the students are in vocational
or transfer programs, attending full-time or part-time, or of majority or
minority ethnic status. (p. 71)
A scale focusing on quality of student experiences does not seem to
have been applied to Australian universities in recent times. Most
universities in Australia use the Course Experience Questionnaire (CEQ)
which measures student satisfaction with courses and teaching (Johnson,
1997; Waugh, 1998, 1999b; Wilson, Lizzio, & Ramsden, 1997). This
questionnaire has five sub-scales (25 items in a Likert format): Good
Teaching, Clear Goals and Standards, Appropriate Assessment, Appropriate
Workload and Generic Skills, and a single item on Overall Satisfaction
with the course. It is based on a model of teaching and is not intended
to cover all the other experiences that universities offer.
In 1999, universities first used the Postgraduate Research Experience Questionnaire (PREQ) to investigate postgraduate experiences
that were expected to be different from undergraduate experiences. This
questionnaire has six sub-scales (28 items in Likert format):
Supervision, Thesis Examination, Skills Development, Goals and
Standards, Intellectual Climate, Infrastructure and a single item on
Overall Satisfaction.
Both the CEQ and the PREQ are the subject of contentious policy debate in
Australian higher education, for three reasons. First, they are used to
develop performance indicators (or will be for the PREQ) with
comparisons between universities. Second, there is the threat of funding
being granted or reduced as a result of performance indicators. Third,
there is some disagreement about the validity of the CEQ to measure
graduate course experiences four months (or more) after graduation and
about whether it samples all the main aspects for all universities.
There is even more disagreement about the validity of the PREQ,
especially since many supervisors have an insufficient number of
postgraduates to satisfy a reliability criterion (and they cannot be
identified for ethical reasons) and the intended unit of analysis is
broad `fields of study' within a university (rather than
supervisors).
Other recent, related Australian studies seem to have focused on
the diversity in experiences of university undergraduates (McInnes,
James, & McNaught, 1995), and on overseas students and their problems in
comparison with those of Australian students (see, for example, Burke,
1986a, 1986b, 1988a, 1988b, 1990; Mullins, Quintrell, & Hancock,
1995; Quintrell, 1990, 1991, 1992). The present study, therefore, has
the potential to investigate the psychometric characteristics of an
Australian Quality of Student Experiences Scale for university students
and help Australian universities to improve the experiences of their
students.
Problems with the USA Quality of Effort Scale
Five aspects of the American Quality of Effort Scale are called
into question. First, if students are asked to respond to items in a 1
to 4 format (from none to very often) and apply this format across all
units (subjects) and experiences, then there are problems with
interpretation. When students interact with many lecturers and other
students, and study many different units (subjects), with different
degrees of effort, it is difficult for some of them to answer globally
and consistently, as the same amount of time may be interpreted differently by different students. There is a consequent measurement
problem for the researcher, because the interpretation is unclear. What is
needed is an ordered response format that can be applied consistently
and logically across all units (subjects) by all students.
Secondly, the validity of the Quality of Effort Scale is suspect
because a proper scale in which the items are ordered from easy to hard
has not been constructed. In addition, no attempt has been made to link,
on the same scale, the student measures to the item difficulties.
Thirdly, the scale only measures student descriptions of their efforts
(which influence their experiences during the course). It is likely that
their efforts and experiences will be influenced by their expectations
(attitudes). Hence what they expect to experience, as well as what they
do experience, ought to be measured at the same time and calibrated on
the same scale. That is, many students will have been told what to
expect at university, perhaps first by teachers, parents and peers, and
then by lecturers. Many students will expect university to be a place where a
variety of ideas and topics are discussed and debated, not just in
relation to a narrow chosen course of study, but in relation to new
discoveries, new techniques and the important scientific and artistic
`issues of the day'. Hence the expectations of many students will
influence their experiences, and be related to their satisfaction with
university (see Conceptual framework below).
Fourthly, the main analysis of the Quality of Effort Scale has only
been performed with traditional measurement techniques and ordinal level
scales. That is, the sub-scales are formed from items found to load on
various factors through factor analysis and combined to form the final
scale. Student scores are formed from adding the scores on all items (an
ordinal level scale) and no check is made to ensure that all items are
answered logically as part of a scale (measuring from low to high).
Fifthly, its conceptual structure is untested at universities in
Australia. Modern measurement programs are now available to create
interval level measures in which item difficulties and student measures
can be calibrated on the same scale and so test the conceptual structure
of the Quality of Student Experiences Scale (see Andrich, 1988a, 1988b;
Andrich, Sheridan, Lyne, & Luo, 1998; Rasch, 1960/1980; Waugh, 1998,
1999a, 1999b).
Changes made to the American Quality of Effort Scale
Changes were made to the American version of the Quality of Effort
Scale to overcome the five problems referred to above. The original
eight sub-scales (My Course, The Library, My Lecturers, Student
Acquaintances, The Arts, Writing, The Sciences and Vocations) were
retained in the new design. The original 61 items were revised and
rewritten so as to be applicable to Australia and written in a positive
format. There are now 60 items relating to expectations and, in direct
correspondence, 60 items relating to course experiences (see Appendix
I). The items were ordered under their respective sub-scale headings,
which makes it clear to the students what sub-scale is being measured.
The response format was then changed in two ways. First, two columns
were added for responses, one for expectations and another for
experiences. Second, the response categories were changed to an ordered
format to provide a better measurement structure:
In relation to all the units, or nearly all the units (subjects), studied;
in relation to most units (subjects) studied;
in relation to some units (subjects) studied; and
in relation to no units, or only one unit (subject), studied.
The data were analysed with a Rasch measurement model program to
create an interval level scale and to investigate the conceptual
structure of the scale (Andrich, Sheridan, Lyne, & Luo, 1998).
There are a number of items that may appear to be inappropriate for
use in Australia and could have been excluded on simple conceptual
grounds. For example, `I expected to have to explain an experimental
procedure to another university student' (sciences aspect, items
95-96) and `I expected to talk about art (painting, sculpture, artists
and architecture) with other students at university' (arts aspect,
items 61-62). It is argued that items like these could not be excluded
on simple conceptual grounds for two reasons. The first is that we
genuinely do expect our best university students to be able to converse on many topics in the arts and sciences, no matter what course they are
studying, and many can. This is part of the university environment which
includes a wide breadth of learning, knowledge and experience. The
second is a measurement issue. We know that many students will answer in
none or only one unit and the scale needs to discriminate these students
from the best students. Hence, it is appropriate to include these items
to check that they fit the measurement model and that they discriminate
between students of differing quality of experiences.
The Australian Quality of Student Experiences Scale places
expectations and experiences on the same interval level scale so that a
model of expectations and experiences can be constructed from a basis of
evidence. Expectations (attitudes) that fit the Rasch model are expected
to fall at an easier position on the scale than their corresponding
experiences that fit the model. This is done so that expectations and
experiences can be compared accurately. It stands in contrast to the
usual procedure which is to construct a set of items for expectations
and compare answers on the same set of items for experiences without any
statistical or measurement link at the interval level (an invalid procedure).
The new scale uses the terms `easy/easier' and
`hard/harder' for items relating to expectations and experiences.
This may appear, at first reading, to be out of place. Whereas it is
common to use easy/difficult to describe achievement items in, for
example, a science test, it is not common to use easy/difficult for
items relating to expectations and experiences. Nevertheless this is
what is used here and the scale literally indicates that some
expectations are easier to hold than their corresponding experiences.
Limitations
There are three main limitations to this study: acceptance of the
Rasch model, acceptance of the measurement of expectations and
experiences at the same time, and a perceived decrease in validity in
exchange for an increase in unidimensionality. With regard to the
first limitation, not all measurement researchers accept the Rasch model
as valid (see Divgi, 1986; Goldstein, 1980; Traub, 1983). A question
arises as to whether the researcher should choose a model that fits the
data or use the requirements of measurement to model the data (Andrich,
1989). The Rasch model uses the latter approach. It requires the
researcher to define a continuum (from less to more), use a statistical
measurement model to check on the consistency of the person measures and
the item difficulties, and use a scale in which the property of
additivity for item difficulties is valid and the scale values of the
statements are not affected by the opinions of people who help to
construct it. The former (traditional) approach means that a more
complex model is needed to fit the data by increasing the number of
parameters to allow for variation in item discrimination and for
guessing, as two examples. In that case, the model would then have two
person parameters and two item parameters, as a minimum, to model the
data.
With regard to the second limitation, the study assumes that
students who are surveyed in September and October are able to state
reliably their expectations as these were in March. On the surface, this
may be questionable and there is evidence that students' retrospective recollections can be biased by their implicit theories about personal
change (Conway & Ross, 1984; Ross, 1989). It may be that
expectations should have been listed first for all items so that all
experience items are answered later. However the questionnaire was
tested with 12 students individually and they were interviewed
afterwards. The students said that they were clearly able to separate
their expectations from their experiences and, it should be noted, it is
expectations at the time of measurement that are related to experiences.
With regard to the third limitation, the Rasch model will reject items
that do not fit the model (thus increasing its unidimensionality) but,
because there will then be a different number of expectation and
experience items, it may be claimed that there is a loss of validity.
The counter claim is that there is an increase in validity and
unidimensionality. The approach taken in this study is to use only items
that contribute to a checkable interval level scale where both
expectations and experiences are calibrated on the same scale. The more
usual approach is to have a set of expectation (attitude) items and then
compare the answers on the same set of items for experiences. This
approach does not check that the data fit a proper measurement scale,
nor that expectations and experiences are calibrated on the same scale,
and hence comparisons of expectations with experiences are called into
question.
Conceptual framework
It is assumed that there is an underlying trait that could be
called Quality of Student Experiences at University. This trait would be
exhibited as an attitude (expectation) at the beginning of the course
and be modified by experiences during the course. The trait is related
to the academic, personal and group experiences for eight aspects
associated with student efforts to participate in the life of a
university: My Course, The Library, My Lecturers, Student Acquaintances,
The Arts, Writing, The Sciences and Vocations. Thus Quality of Student
Experiences is conceptualised, in part, as an expectation derived from
eight aspects of university life and, in part, as an experience during
university life.
It is conceptualised that, although students will have high
expectations for most of the items in the eight aspects of their
lives at university, their experiences will be of a lower standard.
That is, they will find most of the items easier in the expectation mode
and more difficult in the experience mode. For example, it is
conceptualised that students will expect teaching staff to compare and
contrast different points of view on many topics (an easy item), but
when they come to university, they find that many topics are presented
with only one point of view (a harder item). Similarly students will
expect library staff to spend time helping them find articles and
material on various topics (easy item), but when they come to university
they find that many librarians have only a limited time to spend with
each student (hard item). It is theorised that this pattern of easy
expectation items, which are harder in experience, will occur for most
items and most students, provided the items fit the model and can be
placed on the scale. This is in line with the theory that attitudes
(expectations) influence behaviour (experiences) (see Ajzen, 1989;
Fishbein & Ajzen, 1975; Waugh, 1998, 1999a, 1999b).
It is conceptualised that students need time in a university
environment in order to grow and develop in their knowledge, attitudes
and understandings and, ultimately, to succeed. Students who make the
effort to take part in a variety of experiences that universities offer
are more likely to develop a breadth of knowledge, understand varying
attitudes and points of view, and bring that knowledge and understanding
to solving problems and achieving academic success. As they improve
their knowledge and understanding, they will achieve at a higher level.
That is, involvement over time in a breadth of experiences offered at
university is an indicator of student effort, and involvement is a
measure of experience that at least partially influences academic
success. As student knowledge, understanding and success in academic
work grow, students are expected to gain greater acceptance among their
peers and others, greater self-confidence and greater satisfaction with
university.
It is expected that it will require more effort to be involved in
some activities and experiences than in others, which will, in turn, lead
to greater growth, development and understanding. For example, students
need to make the effort to compare and contrast a variety of points of
view about major issues in a particular course, including an evaluation
of the force of particular strengths and weaknesses, rather than just
summarise the major points. They need to be able to think logically and
apply that logic in a variety of areas. They are more likely to be able
to compare and contrast various points and think logically, if they make
the effort to be involved in a variety of experiences and if they
develop an expectation that this is necessary.
This leads to the view that expectations are related to
experiences. Students who expect to take part in the variety of
experiences offered at university are more likely than others to make
the effort to be involved in a wide variety of experiences and to bring
those experiences to bear, where appropriate, in their particular field
of study. It is, therefore, expected that high-achieving students (like
Rhodes Scholars, major prize winners and the best students in university
courses) expect to be involved in a variety of areas and display their
talent in a variety of areas, not just in one particular subject.
Aims
The present study had three aims. The first was to create an
interval level scale for the Quality of Student Experiences Scale. The
second was to analyse its psychometric properties by using a modern
measurement model, the Extended Logistic Model of Rasch (Andrich, 1988a,
1988b; Rasch, 1980), implemented in a modern computer program (Andrich,
Sheridan, Lyne, & Luo, 1998). The third was to investigate the conceptual
design of the Scale and hence contribute to a model of student
experiences at university.
Sample and administration
The convenience sample consisted of 305 first year students from an
Australian university. Of these, 74 (24.3%) were studying in Early
Childhood Education, 67 (22.0%) in Business Management, 46 (15.1%) in
Biomechanics, 43 (14.1%) in Ecology, 42 (13.8%) in Information
Technology and Research, and 33 (10.8%) in Science.
Following ethics committee approval, the questionnaires were
administered at the beginning or end of a lecture, with the permission
of the lecturers, towards the end of second semester of first year. The
purposes of the questionnaire and the study were explained briefly to the
students. It was pointed out that course expectations and corresponding
course experiences were required for the eight sub-scales. The
questionnaires were anonymous and only grouped data would be reported.
Generally they took 15-20 minutes to complete. Only respondents who
supplied complete data sets were used in the study (except for about 12
who had a few missing responses).
Measurement
Seven measurement criteria have been set out by Wright and Masters
(1981) for creating a scale that measures a variable. First, each item
should be evaluated to see whether it functions as intended. Second, the
relative position (difficulty) of each valid item along the scale that
is the same for all persons should be estimated. Third, each
person's responses should be evaluated to check that they form a
valid response pattern. Fourth, each person's relative score
(attitude or achievement) on the scale should be estimated. Fifth, the
person scores and the item scores must fit together on a common scale
defined by the items and they must share a constant interval from one
end of the scale to the other so that their numerical values mark off
the scale in a linear way. Sixth, the numerical values should be
accompanied by standard errors which indicate the precision of the
measurements on the scale. Seventh, the items should remain similar in
their function and meaning from person to person and group to group so
that they are seen as stable and useful measures. These criteria are
used in creating the Quality of Student Experiences Scale.
Measurement model
The Extended Logistic Model of Rasch is used with the computer
program Rasch Unidimensional Measurement Models (RUMM) (Andrich,
Sheridan, Lyne, & Luo, 1998) to analyse the data. This model unifies
the Thurstone goal of item scaling with extended response categories for
items measuring, for example, course expectations and course
experiences, which are applicable to this study. Item difficulties and
student measures are placed on the same scale. The Rasch method produces
scale-free student measures and sample-free item difficulties (Andrich,
1988b; Wright & Masters, 1982). That is, the differences between
pairs of student measures and pairs of item difficulties are expected to
be sample independent.
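For reference, the dichotomous Rasch model, on which the extended model builds, can be written as follows (a sketch in standard notation, not taken from the paper). The probability that student n, with measure β_n in logits, answers item i, with difficulty δ_i in logits, positively is

\[
P(X_{ni} = 1) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}.
\]

Because the probability depends only on the difference β_n − δ_i, differences between pairs of student measures and pairs of item difficulties do not depend on the particular sample, which is the sample-independence property just described.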
The zero point on the scale does not represent zero expectation or
experience. It is an artificial point representing the mean of the item
difficulties, calibrated to be zero. It is possible to calibrate a true
zero point, if it can be shown that an item represents zero expectation
(or experience). There is no true zero point in the present study.
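Concretely (again in standard notation, which is an assumption here), the calibration constrains the L item difficulties to average zero:

\[
\frac{1}{L}\sum_{i=1}^{L} \delta_i = 0,
\]

so a student located at 0 logits sits at the mean item difficulty, not at a point of zero expectation or experience.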
The RUMM program parameterises an ordered threshold structure,
corresponding with the ordered response categories of the items. The
thresholds are boundaries located between the response categories and
are related to the change in probability of responses occurring in the
two categories separated by the threshold. A special feature of this
version of the RUMM program is that the thresholds are re-parameterised
to create an ordered set of parameters which are directly related to the
Guttman principal components. With four categories, three item
parameters are estimated: location or difficulty (δ), scale
(θ) and skewness (η). The location specifies the average
difficulty of the item on the measurement continuum. The scale specifies
the average spread of the thresholds of an item on the measurement
continuum. The scale defines the unit of measurement for the item and,
ideally, all items constituting the measure should have the same scale
value. The skewness specifies the degree of modality associated with the
responses across the item categories.
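One standard way of writing the polytomous Rasch model that underlies this threshold structure is the following sketch (the notation is not the paper's, and RUMM's location, scale and skewness parameters are a re-expression of the thresholds). With four categories scored x = 0, 1, 2, 3 and thresholds τ_1, τ_2, τ_3, the probability that student n responds in category x of item i is

\[
P(X_{ni} = x) = \frac{\exp\left(\sum_{k=1}^{x} (\beta_n - \delta_i - \tau_k)\right)}{\sum_{j=0}^{3} \exp\left(\sum_{k=1}^{j} (\beta_n - \delta_i - \tau_k)\right)},
\]

where the empty sum for j = 0 is taken as zero. Each threshold τ_k is the point on the continuum at which responses in categories k − 1 and k are equally probable, which is the boundary property described above.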
The RUMM program substitutes the parameter estimates back into the
model and examines the difference between the expected values predicted
from the model and the observed values using two tests-of-fit: one is
the item-trait interaction and the second is the item-student
interaction.
The item-trait test-of-fit (a chi-square) examines the consistency
of the item parameters across the student estimates for each item and
data are combined across all items to give an overall test-of-fit. The
latter shows the collective agreement for all items across students of
differing measures.
The item-student test-of-fit examines both the response pattern of
students across items and items across students. It examines the
residual between the expected estimate and the actual values for each
student-item summed over all items for each student and summed over all
students for each item. The fit statistics approximate a standardised distribution with a mean expectation near zero and a variance near one,
when the data fit the model (Wright & Masters, 1982). Negative
values indicate a response pattern that fits the model too closely
(probably because dependencies are present, see Andrich, 1985) and
positive values indicate a poor fit to the model (probably because
`noise' or other measures are present).
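The following sketch illustrates the item-student fit idea in the simplest (dichotomous) case. It is not the RUMM program's algorithm, and the student measures, item difficulties and data are hypothetical:

import numpy as np

# Sketch only: standardised residuals for a dichotomous Rasch model.
# beta are student measures and delta are item difficulties, in logits.

def expected_prob(beta, delta):
    # Model probability of a positive response for every student-item pair.
    return 1.0 / (1.0 + np.exp(-(beta[:, None] - delta[None, :])))

def standardised_residuals(x, beta, delta):
    # z = (observed - expected) / sqrt(variance); a mean near zero and a
    # variance near one are expected when the data fit the model.
    p = expected_prob(beta, delta)
    return (x - p) / np.sqrt(p * (1.0 - p))

beta = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # 5 hypothetical students
delta = np.array([-0.8, -0.2, 0.3, 0.9])       # 4 hypothetical items
rng = np.random.default_rng(0)
x = (rng.random((5, 4)) < expected_prob(beta, delta)).astype(float)

z = standardised_residuals(x, beta, delta)
print(z.mean(axis=0))   # per-item fit summary across students
print(z.mean(axis=1))   # per-student fit summary across items

Summing the squared residuals over items for each student, or over students for each item, gives fit summaries of the kind reported in Table 1.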
Results
The results are set out in two figures, two tables and three
Appendixes. Figure 1 shows the graph of the measures of Quality of
Student Experiences for the 305 students and the difficulties of the 120
items on the same scale in logits (the log odds of answering
positively). Figure 2 shows the graph of the measures of Quality of
Student Experiences for the 305 students and the difficulties of the 62
items that fit the model on the same scale in logits. Table 1 gives a
summary of the Indices of Student Separation (the proportion of observed
variance considered true) and fit statistics for the 120 item scale
(where 58 items do not fit the model) and the 62 item scale (where all
items fit the model). Table 2 shows a summary of the range and mean item
difficulties for the sub-scales of the 62 item scale. Appendix I shows
the questionnaire items and the difficulties of the 62-item scale.
Appendix II shows, in probability order, the location on the continuum,
fit to the model and probability of fit to the model for the 62 item
scale. Appendix III shows the item thresholds.
[FIGURES 1 AND 2 OMITTED]
Psychometric characteristics of the Quality of Student Experiences
Scale
The 62 items relating to Quality of Student Experiences have a good
fit to the measurement model, which indicates strong agreement among
all 305 students about the different locations of the items on the scale
(see Table 1 and Appendix II). That is, there is strong agreement among
the students about the item difficulties along the scale. The item
threshold values are ordered from low to high, which indicates that the
students have answered consistently and logically with the ordered
response format (Appendix III). The Index of Student Separability for
the 62-item scale is 0.925. This means that the proportion of observed
variance considered true is 92.5 per cent. The difficulties of the items
have a similar spread along the scale to that of the student measures
(see Figure 2). This means that the items are targeted appropriately for
the students. The item-trait tests-of-fit indicate that the values of
the item difficulties are strongly consistent across the range of
student measures. The item-student tests-of-fit (see Table 1) indicate
that there is good consistency of student and item response patterns.
These data indicate that the errors are small and that the power of the
tests-of-fit is excellent.
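For clarity, the Index of Student Separability can be written (in the usual Rasch formulation, an assumption here rather than a statement of the RUMM program's exact computation) as the proportion of observed variance in the student measures that is not error variance:

\[
r = \frac{\sigma^2_{\mathrm{observed}} - \bar{\sigma}^2_{\mathrm{error}}}{\sigma^2_{\mathrm{observed}}},
\]

where σ²_observed is the variance of the student measures and σ̄²_error is the mean of their squared standard errors. A value of r = 0.925 therefore corresponds to 92.5 per cent of the observed variance being considered true.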
However, there is one problem, and it involves the fit of the
sub-scales to the model: 38 out of 46 items for the sub-scales of The
Arts (11 out of 12), The Sciences (16 out of 18) and Writing (11 out of
16) did not fit the model. Students did not answer the response
categories in a logical and consistent way and they could not agree on
the difficulties of the items on the scale. This meant that science
students with high quality of experiences, for example, found items from
The Sciences easy and other students with high quality experiences found
items from The Sciences hard. It was concluded that these 38 items did
not contribute to the measurement of the variable with the other items,
and so they were discarded. This meant, in effect,
that The Arts and The Sciences were not confirmed as main aspects of the
scale of Quality of Student Experiences, at least as measured with these
items.
It could be argued that the deletion of items relating to The Arts,
The Sciences and Writing means that the model of Quality of Student
Experiences has not been fitted to the data properly: that is, there is
a reduction in validity. All universities offer the arts, the sciences
and writing, and many students, though not all, would have experiences
relating to these. In the Rasch analysis, the counter claim is that the
data must fit the measurement model to be valid and produce a proper
scale. The items relating to The Arts, The Sciences and Writing do not
fit the measurement model and hence cannot be included in a proper scale
(at least as worded for this study). It may be that someone could word
some items relating to the arts, the sciences and writing so that all
students, irrespective of subject area studied, can agree on their
difficulties on a proper scale.
The evidence from the Rasch analysis is that the 62-item Quality of
Experiences Scale is valid and reliable. It is suggested that the scale
is not context dependent and is relatively sample independent. That is,
the scale parameters do not depend on the students who answer the items
or on the opinions of the person who constructed the scale. This is a
necessary characteristic of a proper scale and is part of the logic of a
Rasch model. That not all of the paired expectation and experience items
fit the model, or that one item of a pair fits the model and the other
does not, does not invalidate the scale. On the contrary, only items that fit
the model can logically form part of a valid and proper Rasch developed
scale.
Meaning of the Quality of Student Experiences Scale
The 62 items that make up the variable Quality of Student
Experiences are conceptualised as `my expectation at the beginning of
university' and `my experiences during first year university',
measured at the same time, from eight main aspects of university life.
Only six of these aspects--My Course, The Library, My Lecturers, Student
Acquaintances, Writing and Vocations--are confirmed as contributing to
the variable. The 62 items used to measure the main six aspects define
the variable (see Appendix I). They have good content validity and they
are derived from a conceptual framework based on previous research and
theory. Although the difficulties of the various items within each
aspect vary, their mean values are in order from My Course (easiest),
Writing, The Library, Student Acquaintances, My Lecturers, to Vocations
(hardest) (see Table 2). This, together with the data relating to
reliability and fit to the measurement model, is strong evidence for the
construct validity of the variable. This means that the students'
responses to the 62 items are related sufficiently well to represent the
latent variable Quality of Student Experiences at university.
Discussion of the scale
The scale is created at the interval level of measurement with no
true zero point of item difficulty or student measure. Equal distances
on the scale between measures of Quality of Student Experiences
correspond to equal differences between the item difficulties on the
scale. Items at the easy end of the scale (for example, items 3, 4, 5,
6, 11 and 12; see Appendix I) are answered in agreement by nearly all
the students. Items at the hard end of the scale (for example, items
42, 48, 46, 64 and 120; see Appendix
I) are answered in agreement only by those students who have high
measures of Quality of Student Experiences. In this sample of university
students, whereas most had good Quality of Student Experiences, there
were 81 who had less than adequate Quality of Student Experiences (see
Figure 2).
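As a worked illustration of reading the logit values (treating an item dichotomously for simplicity; the full model works through the category thresholds), consider a student located at 0 logits. For the easiest expectation item in Appendix I, at -1.317 logits (items 3/4), the probability of a positive answer is

\[
P = \frac{e^{\,0 - (-1.317)}}{1 + e^{\,0 - (-1.317)}} \approx 0.79,
\]

whereas for the hardest experience item, at +1.170 logits (items 41/42), it is

\[
P = \frac{e^{\,0 - 1.170}}{1 + e^{\,0 - 1.170}} \approx 0.24.
\]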
The 22 expectation items that fitted the model are mostly, though
not all, towards the easy end of the scale (see Appendix I). This means,
for example, that the majority of the students found it easy to say that
they expected to combine ideas from different sources of information in
preparing assignments, that they expected to summarise major points and
information from readings and notes, and that they expected to
participate in class discussions (My Course). They found it easy to say
that they expected to ask their lecturers for information about grades,
assignments and coursework (My Lecturers), easy to expect to write an
outline to organise the sequence of points and ideas in an assignment
(Writing), and relatively easy to expect to ask a librarian for help in
finding library materials (The Library).
Some of the expectation items are towards the hard end of the scale
where students need a high Quality of Student Experiences measure to
answer the items positively. For example, students found it difficult to
say that they expected to explain material to other students (My
Course), difficult to expect to discuss current events, research and
university issues with lecturers, and difficult to discuss career plans
and ambitions with lecturers (My Lecturers). They also, surprisingly,
found it difficult to say that they expected to have serious discussions
with students of differing political opinions (Student Acquaintances).
The items relating to experiences in My Course are mostly towards
the easy end of the scale. Students found it easy to say that they
combined ideas from different sources of information in preparing
assignments, easy to say that they asked questions in class discussions,
and easy to say that they participated in class discussions. They also
found it relatively easy to say that they did extra reading on topics
introduced in classes, that they studied course materials with other
students, and that they compared and contrasted different points of view
in their course. Although students found it easy to say that they used
library computers to find books and easy to prepare a list of references
using the library, they found it difficult to find interesting material
in the library just by browsing.
Most of the items relating to experiences for My Lecturers, Student
Acquaintances and Vocations were towards the hard end of the scale.
Students found it difficult to say that they had discussed their career
plans, ambitions, current events and research issues with their
lecturers (My Lecturers). They found it moderately hard to say that they
had had serious discussions with other students of a different ethnic or
cultural background, philosophy of life or country (Student
Acquaintances). They found it difficult to say that they practised a
vocational task without supervision (or even with a lecturer present),
difficult to say that they identified a vocational problem and located
information about what to do to solve that problem, or even that they
read how to perform an occupational task.
Nearly all the items that fitted the model are easier in their
expectation mode than in their corresponding experience mode, as
conceptualised. Thus, although students found it easy to expect to
participate in class discussions, it was harder to experience this in
their courses (My Course); although they found it easy to expect to have
to ask a librarian for help in finding information, it was harder to
experience; although they found it moderately easy to expect to discuss
assignments with their lecturers, it was harder in experience (My
Lecturers). Although students found it moderately easy to expect to have
serious discussions with others from different backgrounds, it was
harder in actual experience (Student Acquaintances) and although
students found it moderately easy to say that they expected to have to
listen to a lecturer explain how to perform an occupational task, it was
harder to experience this.
The current Rasch measurement model analysis supports the
conceptual design of Quality of Student Experiences as based on a model
involving six main aspects. These are My Course, Writing, The Library,
My Lecturers, Student Acquaintances and Vocations. In line with this,
the analysis supports the view that Quality of Student Experiences can
be measured and used as a unidimensional variable based on items in the
expectation mode and actual experience mode. This stands in contrast to
claims, using traditional measurement techniques, that the Community
College Student Experiences Questionnaire (Ethington & Polizzi,
1996; Pascarella & Terenzini, 1991; Pace, 1979a, 1984, 1992) is
based on eight main aspects. The current analysis found that most of the
items of aspects involving The Arts, Writing and The Sciences did not
fit the model. The current analysis also supports the view that Quality
of Student Experiences comprises an expectation component as well
as an actual experiences component, such that the expectation items are
easier than their corresponding experience items. While 33 of the 34
experience items from the 6 main aspects contribute to the scale, only
21 of the corresponding expectation items contribute. This means that
the expectation and experience items contribute differently to the
scale.
Implications
Ethington and Polizzi (1996) claim that there is a `strong
relationship between the extent to which students become involved in the
academic and social systems of educational institutions and their
subsequent growth and development and attainment of their educational
goals' (p. 725) (see Pascarella & Terenzini, 1991, for a review
of this research). It is suggested that universities should foster
student involvement in a variety of educational activities (academic and
non-academic) and not just have a focus on the course and assessment.
Students learn more, stay longer in education at their university and
support their university more when they are involved in its varied
activities (Pascarella & Terenzini, 1991). For the university where
the current study was undertaken, strategies could be developed to
overcome the deficiencies noted below.
There is a question about interpreting the easy and difficult items
on the scale in a way that leads to improvements. Students who have
Quality of Experiences scores at a lower point on the scale than the
difficulty of an item have a low chance of answering that item
positively. However, this does not necessarily mean that the item is one
on which administrators can take action for improvement. There are two
reasons for this. The first is that
the wording of an item may contribute to its difficulty. For example,
the item `I expected to discuss my career plans, educational plans,
interests and ambitions with my lecturers' is difficult because
there is not the time or inclination for all lecturers to do this for
several hundred of their students. That this is a difficult item does
not translate directly into an administrative improvement issue. The
second reason is that this Rasch scale has no true zero point for
Quality of Experiences. Thus, if an item is easy, it could still lead to
administrative improvements. For example, the item `I expected to ask my
lecturers for information about grades, assignments, and course
work' is easy, but is it easy enough? It could be argued that this
should be a very much easier item and that this could lead to an
administrative action for improvement.
The university, where the current study was performed, has an
excellent performing arts centre and its students present many fine
performances, a large number of which are free to students and staff.
Yet The Arts, as a main area of student observation and
discussion, did not fit the measurement model, and a majority of the
students in the current study had not seen a play, dance or musical.
They did not expect to talk with other students about The Arts and they
did not experience The Arts in their first year of university.
Similarly The Sciences, as a main area of student activity, did not
fit the measurement model and a majority of students in the current
study did not talk with other students about the social, ethical and
scientific issues of our times. Many students did not discuss or explain
the scientific basis for environmental concerns about such issues as
energy, pollution, recycling and genetics. They did not experience
having to explain scientific principles and procedures to others. Yet
the university has degree programs in various science areas and
engineering and attempts to promote the sciences.
In the main areas that did fit the model, there were a number of
items that students found to be difficult. These are aspects that the
university could improve, for the benefit of both the university and
its students. Students found it difficult to agree that they
discussed current important issues, assignment comments, career paths,
interests, research and university issues with their lecturers. The
university could organise more free lectures and seminars on important
issues and research to improve student-lecturer interaction and student
involvement.
Students found it difficult to agree that they had practised,
demonstrated, explained or watched an occupational or vocational task.
The university could provide more focus, for a greater variety of
students, on links with vocations and occupations, where that is
appropriate.
The library is a central aspect of any university and is one of the
focus areas for staff and students. Yet, in the current study, students
found it difficult to say that they found some interesting material in
the library just by browsing in the stacks and they found it moderately
difficult to say that they had asked a librarian for help in finding
library materials. Furthermore the item relating to checking out books
from the library to read at home did not fit the model, which indicates
that many students, across a range of experience measures, did not take
books home for interest from the university library.
An implication for further research is that a different wording
could be tried and analysed for those items where both expectations and
experiences do not fit the model, especially for The Arts, The Sciences
and Writing. This is because it may be unsatisfying to have a scale
without corresponding expectation and experience items, even though such
a scale is technically acceptable. One may intuitively feel that the content validity
is better if both expectation and experience items fit the measurement
model for all items.
Conclusion
The Rasch model was useful in constructing a scale of Quality of
Student Experiences at university, with items relating to My Course, The
Library, My Lecturers, Student Acquaintances, Writing and Vocations. The
final scale of 62 items had good psychometric properties. The proportion
of observed variance considered true was 92.5 per cent. The threshold
values were ordered in correspondence with the ordering of the response
categories. The data fitted the measurement model so that the items are
arranged from easy to hard on the scale. Item difficulties and student
measures are calibrated on the same scale. Items related to The Arts and
The Sciences, and many from Writing were rejected since students could
not agree on the difficulties or there was misfit to the model. The
measurement requirements also meant that different numbers of
expectation and experience items fitted the model. Where expectation
items and corresponding experience items fitted the model, expectation
items were easier than experience items, as conceptualised. The scale is
sample independent; the item difficulties do not depend on the sample of
university students used or on the opinions of the person who
constructed the items. However, the student measures in this study are
only relevant to the university involved. Researchers involved with
student experiences at university and university administrators should
take note of the results of this study.
Keywords
attitude measures
expectation
measurement techniques
measures (individual)
student experiences
university students
Appendix I Australian Quality of Student Experiences Scale (Final
62 item scale)
Questionnaire: Expectations and experiences in university courses
Please rate the 120 statements, in relation to all the units (subjects)
studied in your course, according to the following response format.
Place a number corresponding to your expectation (at the beginning of
your course) and your experiences (during your course) on the
appropriate line opposite each statement:
In relation to all the units, or nearly all the units
(subjects), studied put 3
In relation to most of the units (subjects), studied put 2
In relation to some of the units (subjects), studied put 1
In relation to no units, or only one unit (subject),
studied put 0
Example
If your expectation, at the beginning of your course, was to
participate in class discussions in all your units, put 3
and, if you only experienced this in some units, put 1.
Item 1/2. I expect to participate in class
discussions 3 1
Item no.  Item wording                    Expectation at the beginning
Sub-scale: My Course (18 items)
1/2 I expected to participate in class
discussions. -0.837
3/4 I expected to combine ideas from different
sources of information in preparing
assignments. -1.317
5/6 I expected to summarize major points and
information from readings or notes. -1.251
7/8 I expected that I would explain material
to other students. +0.336
9/10 I expected to do additional readings on
topics that were introduced in class or
lectures. -0.260
11/12 I expected to ask questions about points
made in class discussions, lectures or
readings. -0.889
13/14 I expected to study course materials
with other students. -0.233
15/16 I expected to compare and contrast
different points of view presented in
my course. -0.267
17/18 I expected to consider the accuracy and
credibility of information from
different sources. -0.099
Sub-scale: The Library (14 items)
19/20 I expected to use the library as a quiet
place to read and study materials. No fit
21/22 I expected to read newspapers, magazines
and journals located in the library. +0.129
23/24 I expected to check out books from the
library to read at home. No fit
25/26 I expected to use the library computers
to find books on topics that I wanted. No fit
27/28 I expected to have to prepare a list of
references for assignments, using the
library. No fit
29/30 I expected to have to ask a librarian for
help in finding library materials. -0.177
31/32 I expected to find some interesting
material in the library just by browsing
in the stacks. No fit
Sub-scale: My Lecturers (16 items)
33/34 I expected to ask my lecturers for
information about grades, assignments
and course work. -0.687
35/36 I expected to talk (sometimes and
briefly) with my lecturers after class
about course content. -0.207
37/38 I expected to make an appointment to
meet with my lecturers in his or her
office, sometimes. No fit
39/40 I expected to discuss my assignments
with my lecturers. -0.182
41/42 I expected to discuss my career plans,
educational plans, interests and ambitions
with my lecturers. +0.580
43/44 I expected to discuss comments made by
lecturers on assignments that I wrote. No fit
45/46 I expected to discuss with lecturers
(sometimes and informally) current
events, research and university issues. +0.557
47/48 I expected to discuss performance,
difficulties or personal problems with
my lecturers. No fit
Sub-scale: Student Acquaintances (12 items)
49/50 I expected to have serious discussions
with students who are older and younger
than me. No fit
51/52 I expected to have serious discussions
with students whose ethnic or cultural
background is different from mine. -0.048
53/54 I expected to have serious discussions
with students whose philosophy of life &
personal values are different from mine. No fit
55/56 I expected to have serious discussions
with students whose political opinions
are different from mine. +0.320
57/58 I expected to have serious discussions
with students whose religious beliefs
are different from mine. No fit
59/60 I expected to have serious discussions
with students from a different country
from mine. +0.039
Sub-scale: The Arts (12 items)
61/62 I expected to talk about art (painting,
sculpture, artists and architecture) with
other students at university. No fit
63/64 I expected to talk about music
(classical, popular) and musicians with
other students at university. No fit
65/66 I expected to talk about theatre (plays,
musicals and dance) with other students
at university. No fit
67/68 I expected to attend an art exhibition
at university. No fit
69/70 I expected to attend a concert or other
musical event at university. No fit
71/72 I expected to attend a play, dance
concert, or other theatrical performance
at university. No fit
Sub-scale: Writing (16 items)
73/74 I expected to use a dictionary to look
up the proper meaning, definition and
spelling of words. No fit
75/76 I expected to prepare an outline to
organize the sequence of ideas
and points in an assignment. -0.614
77/78 I expected to have to think about
grammar, sentence structure, paragraphs,
and word choice in assignments. No fit
79/80 I expected to have to write a rough draft
of an assignment and revise it, before
submitting it to my lecturer. No fit
81/82 I expected to use a computer in typing
and preparing my assignments. No fit
83/84 I expected to ask other people to read
something I wrote to see if it was clear
to them. No fit
85/86 I expected to spend at least 5 hours
(or more) writing an assignment. No fit
87/88 I expected to ask my lecturer for advice
and help to improve my assignment or to
explain comments written on my assignment. No fit
Sub-scale: The Sciences (18 items)
89/90 I expected to have to memorise formulae,
definitions and technical terms. No fit
91/92 I expected to practise and improve my
skills in using laboratory equipment. No fit
93/94 I expected to have to show another
university student how to use a piece of
scientific equipment. No fit
95/96 I expected to have to explain an
experimental procedure to another
university student. No fit
97/98 I expected to have to explain my
understanding of some scientific principle
by explaining it to other students. No fit
99/100 I expected to complete an experiment or
project using scientific methods. No fit
101/102 I expected to talk about social and
ethical issues relating to science and
technology (such as energy, pollution,
genetics). No fit
103/104 I expected to use information learned in
a science class to understand some
aspect of the world around us. No fit
105/106 I expected to have to explain to someone
the scientific basis for environmental
concerns about such issues as energy,
pollution, recycling and genetics. No fit
Sub-scale: Vocations (14 items)
107/108 I expected to read how to perform an
occupational task or vocational skill. No fit
109/110 I expected to have to listen to a lecturer
explain how to perform an occupational
task or vocational skill. -0.066
111/112 I expected to watch a lecturer demonstrate
an occupational task or vocational skill. -0.030
113/114 I expected to practise an occupational task
or vocational skill monitored by a
lecturer or other student. No fit
115/116 I expected to practise an occupational skill
or vocational task without supervision. No fit
117/118 I expected to identify a vocational problem
and locate information about what to do
to solve the problem. No fit
119/120 I expected to diagnose a vocational problem
and carry out an appropriate procedure
without consultation. No fit
Item no.  Item wording                    Experiences during the course
Sub-scale: My Course (18 items)
1/2 I expected to participate in class
discussions. -0.597
3/4 I expected to combine ideas from different
sources of information in preparing
assignments. -1.257
5/6 I expected to summarize major points and
information from readings or notes. -0.835
7/8 I expected that I would explain material
to other students. +0.041
9/10 I expected to do additional readings on
topics that were introduced in class or
lectures. -0.228
11/12 I expected to ask questions about points
made in class discussions, lectures or
readings. -0.738
13/14 I expected to study course materials
with other students. -0.220
15/16 I expected to compare and contrast
different points of view presented in
my course. -0.140
17/18 I expected to consider the accuracy and
credibility of information from
different sources. -0.053
Sub-scale: The Library (14 items)
19/20 I expected to use the library as a quiet
place to read and study materials. No fit
21/22 I expected to read newspapers, magazines
and journals located in the library. +0.207
23/24 I expected to check out books from the
library to read at home. No fit
25/26 I expected to use the library computers
to find books on topics that I wanted. -0.614
27/28 I expected to have to prepare a list of
references for assignments, using the
library. -0.747
29/30 I expected to have to ask a librarian for
help in finding library materials. +0.201
31/32 I expected to find some interesting
material in the library just by browsing
in the stacks. +0.565
Sub-scale: My Lecturers (16 items)
33/34 I expected to ask my lecturers for
information about grades, assignments
and course work. -0.400
35/36 I expected to talk (sometimes and
briefly) with my lecturers after class
about course content. +0.202
37/38 I expected to make an appointment to
meet with my lecturers in his or her
office, sometimes. +0.828
39/40 I expected to discuss my assignments
with my lecturers. +0.141
41/42 I expected to discuss my career plans,
educational plans, interests and ambitions
with my lecturers. +1.170
43/44 I expected to discuss comments made by
lecturers on assignments that I wrote. +0.489
45/46 I expected to discuss with lecturers
(sometimes and informally) current
events, research and university issues. +0.986
47/48 I expected to discuss performance,
difficulties or personal problems with
my lecturers. +1.050
Sub-scale: Student Acquaintances (12 items)
49/50 I expected to have serious discussions
with students who are older and younger
than me. -0.055
51/52 I expected to have serious discussions
with students whose ethnic or cultural
background is different from mine. +0.142
53/54 I expected to have serious discussions
with students whose philosophy of life &
personal values are different from mine. +0.169
55/56 I expected to have serious discussions
with students whose political opinions
are different from mine. No fit
57/58 I expected to have serious discussions
with students whose religious beliefs
are different from mine. No fit
59/60 I expected to have serious discussions
with students from a different country
from mine. +0.261
Sub-scale: The Arts (12 items)
61/62 I expected to talk about art (painting,
sculpture, artists, architecture) with
other students at university. No fit
63/64 I expected to talk about music
(classical, popular) and musicians with
other students at university. +0.921
65/66 I expected to talk about theatre (plays,
musicals and dance) with other students
at university. No fit
67/68 I expected to attend an art exhibition
at university. No fit
69/70 I expected to attend a concert or other
musical event at university. No fit
71/72 I expected to attend a play, dance
concert, or other theatrical performance
at university. No fit
Sub-scale: Writing (16 items)
73/74 I expected to use a dictionary to look
up the proper meaning, definition and
spelling of words. -0.326
75/76 I expected to prepare an outline to
organize the sequence of ideas
and points in an assignment. -0.535
77/78 I expected to have to think about
grammar, sentence structure, paragraphs,
and word choice in assignments. No fit
79/80 I expected to have to write a rough draft
of an assignment and revise it, before
submitting it to my lecturer. No fit
81/82 I expected to use a computer in typing
and preparing my assignments. No fit
83/84 I expected to ask other people to read
something I wrote to see if it was clear
to them. -0.031
85/86 I expected to spend at least 5 hours
(or more) writing an assignment. No fit
87/88 I expected to ask my lecturer for advice
and help to improve my assignment or to
explain comments written on my assignment. +0.135
Sub-scale: The Sciences (18 items)
89/90 I expected to have to memorise formulae,
definitions and technical terms. +0.105
91/92 I expected to practise and improve my
skills in using laboratory equipment. No fit
93/94 I expected to have to show another
university student how to use a piece of
scientific equipment. No fit
95/96 I expected to have to explain an
experimental procedure to another
university student. No fit
97/98 I expected to have to explain my
understanding of some scientific principle
by explaining it to other students. No fit
99/100 I expected to complete an experiment or
project using scientific methods. No fit
101/102 I expected to talk about social and
ethical issues relating to science and
technology (such as energy, pollution,
genetics). +0.795
103/104 I expected to use information learned in
a science class to understand some
aspect of the world around us. No fit
105/106 I expected to have to explain to someone
the scientific basis for environmental
concerns about such issues as energy,
pollution, recycling and genetics. No fit
Sub-scale: Vocations (14 items)
107/108 I expected to read how to perform an
occupational task or vocational skill. +0.569
109/110 I expected to have to listen to a lecturer
explain how to perform an occupational
task or vocational skill. +0.170
111/112 I expected to watch a lecturer demonstrate
an occupational task or vocational skill. +0.312
113/114 I expected to practise an occupational task
or vocational skill monitored by a
lecturer or other student. +0.430
115/116 I expected to practise an occupational skill
or vocational task without supervision. +0.657
117/118 I expected to identify a vocational problem
and locate information about what to do
to solve the problem. +0.620
119/120 I expected to diagnose a vocational problem
and carry out an appropriate procedure
without consultation. +0.813
Notes
(1) The difficulties are in logits (the log odds of successfully
answering the item).
(2) Negative logit values indicate easy items.
(3) Positive logit values indicate hard items.
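To make the logit metric concrete, here is a minimal sketch in Python
(not part of the original analysis, and simplified to the dichotomous
Rasch model, whereas the actual items are polytomous; see Appendix III)
converting an item difficulty into the probability that a student of a
given ability endorses the item.

    import math

    def endorsement_probability(difficulty, ability=0.0):
        # Dichotomous Rasch model: probability that a student of the
        # given ability endorses an item of the given difficulty, with
        # both measured in logits (log odds).
        return 1.0 / (1.0 + math.exp(difficulty - ability))

    # Note (2): negative difficulties mark easy items.  An average
    # student (ability 0) endorses item 3/4 (-1.257) with p ~ 0.78.
    print(round(endorsement_probability(-1.257), 2))
    # Note (3): positive difficulties mark hard items.  The same
    # student endorses item 41/42 (+1.170) with p ~ 0.24.
    print(round(endorsement_probability(1.170), 2))

A difficulty of zero logits corresponds to even odds (probability 0.5)
for a student of average ability.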
APPENDIX II Individual item fit (62 items)
Label Location SE Residual ChiSq Probab
Ex006 1006 -0.835 0.08 0.565 0.539 0.908
Ex032 1032 0.565 0.06 0.564 0.545 0.906
Ex111 1111 -0.030 0.06 0.350 0.577 0.899
Ex109 1109 -0.066 0.06 0.406 0.687 0.872
Ex045 1045 0.557 0.06 0.319 0.698 0.870
Ex035 1035 -0.207 0.06 0.482 0.701 0.869
Ex088 1088 0.135 0.06 0.216 0.739 0.860
Ex007 1007 0.336 0.07 0.315 0.761 0.854
Ex075 1075 -0.614 0.07 -0.001 1.001 0.795
Ex050 1050 -0.055 0.06 0.985 1.038 0.786
Ex041 1041 0.580 0.06 0.498 1.131 0.762
Ex102 1102 0.795 0.06 0.264 1.165 0.754
Ex110 1110 0.170 0.07 0.179 1.178 0.751
Ex112 1112 0.312 0.06 -0.078 1.219 0.741
Ex048 1048 1.050 0.07 1.218 1.252 0.733
Ex039 1039 -0.182 0.06 1.605 1.257 0.731
Ex022 1022 0.207 0.06 1.142 1.373 0.703
Ex030 1030 0.201 0.06 1.352 1.391 0.699
Ex076 1076 -0.535 0.07 -0.264 1.544 0.662
Ex008 1008 0.041 0.08 0.625 1.599 0.649
Ex108 1108 0.569 0.06 1.225 1.612 0.646
Ex059 1059 0.039 0.06 -0.060 1.735 0.618
Ex018 1018 -0.053 0.07 0.000 1.792 0.605
Ex034 1034 -0.400 0.07 -0.393 1.885 0.584
Ex055 1055 0.320 0.06 0.505 1.956 0.569
Ex033 1033 -0.687 0.07 -0.225 1.995 0.560
Ex011 1011 -0.889 0.08 -0.739 2.215 0.514
Ex021 1021 0.129 0.06 2.168 2.443 0.470
Ex054 1054 0.169 0.06 0.397 2.568 0.447
Ex040 1040 0.141 0.07 -0.375 2.765 0.412
Ex014 1014 -0.220 0.07 0.562 2.770 0.411
Ex010 1010 -0.228 0.07 1.399 3.202 0.342
Ex042 1042 1.170 0.07 -0.418 3.243 0.336
Ex060 1060 0.261 0.06 0.001 3.281 0.330
Ex051 1051 -0.048 0.06 0.253 3.350 0.320
Ex029 1029 -0.177 0.06 1.410 3.453 0.306
Ex116 1116 0.657 0.06 -0.326 3.519 0.297
Ex120 1120 0.813 0.07 -0.492 3.701 0.274
Ex012 1012 -0.738 0.08 -0.442 3.711 0.273
Ex002 1002 -0.597 0.08 0.316 3.716 0.272
Ex009 1009 -0.260 0.06 1.962 3.804 0.261
Ex015 1015 -0.267 0.07 -0.558 3.982 0.241
Ex003 1003 -1.317 0.09 -0.021 4.027 0.236
Ex004 1004 -1.257 0.09 0.987 4.028 0.236
Ex013 1013 -0.233 0.07 1.495 4.109 0.227
Ex114 1114 0.430 0.06 0.030 4.121 0.226
Ex017 1017 -0.099 0.07 0.755 4.397 0.198
Ex052 1052 0.142 0.06 0.374 4.728 0.168
Ex038 1038 0.828 0.07 -0.391 4.739 0.167
Ex005 1005 -1.251 0.09 1.157 4.921 0.152
Ex044 1044 0.489 0.06 -0.558 4.965 0.149
Ex016 1016 -0.140 0.08 -0.048 4.984 0.148
Ex036 1036 0.202 0.07 -0.230 5.377 0.120
Ex001 1001 -0.837 0.07 0.811 5.759 0.097
Ex046 1046 0.986 0.07 -0.582 5.811 0.094
Ex028 1028 -0.747 0.07 0.957 5.920 0.088
Ex084 1084 -0.031 0.06 1.733 7.288 0.034
Ex118 1118 0.620 0.06 -1.114 9.043 0.000
Ex090 1090 0.105 0.06 3.272 11.703 0.000
Ex064 1064 0.921 0.07 2.252 13.587 0.000
Ex074 1074 -0.326 0.06 1.287 14.656 0.000
Ex026 1026 -0.614 0.07 1.697 15.513 0.000
Notes
(1) Location is the item difficulty on the scale.
(2) Residual is the observed response minus the expected value.
(3) Probab is the chi-square probability of fit to the model.
It is sensitive to sample size and should not be interpreted
too strictly.
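As a worked illustration of Note (2), this Python sketch (the function
names are mine; RUMM itself summarises residuals in standardised form)
computes a raw residual for a dichotomous response.

    import math

    def expected_value(ability, difficulty):
        # Rasch expectation for a dichotomous item: the probability of
        # a positive response, ability and difficulty in logits.
        return 1.0 / (1.0 + math.exp(difficulty - ability))

    def raw_residual(observed, ability, difficulty):
        # Note (2): the observed response minus the expected value.
        return observed - expected_value(ability, difficulty)

    # A student at 0.5 logits responding positively (1) to item Ex032,
    # whose location in the table above is +0.565:
    print(round(raw_residual(1, 0.5, 0.565), 3))  # ~0.516

The ChiSq and Probab columns aggregate such departures between
observed and expected responses across the sample; as Note (3) warns,
the resulting probability is sensitive to sample size.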
APPENDIX III Uncentralised thresholds (62 items)
Label               1        2        3
Ex001 1001 -2.005 -.450 -.057
Ex002 1002 -2.217 -.432 .857
Ex003 1003 -1.931 -1.384 -.634
Ex004 1004 -2.467 -1.214 -.090
Ex005 1005 -1.970 -1.306 -.479
Ex006 1006 -1.703 -.888 .085
Ex007 1007 -.689 .576 1.121
Ex008 1008 -1.704 .175 1.651
Ex009 1009 -.532 -.306 .057
Ex010 1010 -1.068 -.265 .648
Ex011 1011 -1.959 -.726 .018
Ex012 1012 -2.524 -.370 .680
Ex013 1013 -.886 -.326 .515
Ex014 1014 -1.117 -.250 .706
Ex015 1015 -1.040 -.680 .918
Ex016 1016 -1.662 -.231 1.472
Ex017 1017 -.785 -.431 .917
Ex018 1018 -1.196 -.268 1.304
Ex021 1021 -.028 .034 .381
Ex022 1022 -.452 .144 .928
Ex026 1026 -1.122 -.585 -.134
Ex028 1028 -.904 -.779 -.557
Ex029 1029 -.686 .039 .115
Ex030 1030 -.337 .202 .739
Ex032 1032 .036 .334 1.324
Ex033 1033 -.859 -.736 -.466
Ex034 1034 -1.266 -.238 .304
Ex035 1035 -.781 -.229 .389
Ex036 1036 -.833 .304 1.136
Ex038 1038 .244 .988 1.253
Ex039 1039 -.507 -.242 .203
Ex040 1040 -.678 .167 .934
Ex041 1041 .369 .432 .939
Ex042 1042 .676 .956 1.878
Ex044 1044 -.362 .638 1.192
Ex045 1045 .288 .434 .949
Ex046 1046 .183 1.066 1.710
Ex048 1048 .638 .772 1.742
Ex050 1050 -.668 -.042 .545
Ex051 1051 -.481 -.167 .505
Ex052 1052 -.324 .155 .595
Ex054 1054 -.391 .013 .883
Ex055 1055 .145 .178 .638
Ex059 1059 -.205 -.159 .481
Ex060 1060 -.158 .439 .502
Ex064 1064 .594 .942 1.228
Ex074 1074 -.763 -.190 -.026
Ex075 1075 -.768 -.618 -.455
Ex076 1076 -1.162 -.797 .355
Ex084 1084 -.540 .057 .390
Ex088 1088 -.630 .310 .725
Ex090 1090 .064 .103 .147
Ex102 1102 .628 .724 1.034
Ex108 1108 -.033 .485 1.256
Ex109 1109 -.186 -.443 .431
Ex110 1110 -.644 .184 .970
Ex111 1111 -.167 .012 .066
Ex112 1112 -.243 .152 1.026
Ex114 1114 .047 .260 .984
Ex116 1116 .234 .261 1.477
Ex118 1118 .192 .416 1.251
Ex120 1120 .410 .585 1.444
No. of items = 62. No. of students = 305.
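Each row above gives the three thresholds separating an item's four
response categories. The following Python sketch (mine, not the RUMM
implementation) shows how uncentralised thresholds generate category
probabilities under the Rasch partial credit parameterisation.

    import math

    def category_probabilities(ability, thresholds):
        # Probabilities of scoring 0..len(thresholds) on an item,
        # given a student ability and the item's uncentralised
        # thresholds (all in logits).
        log_numerators = [0.0]          # category 0
        running = 0.0
        for tau in thresholds:
            running += ability - tau    # add (ability - threshold)
            log_numerators.append(running)
        weights = [math.exp(v) for v in log_numerators]
        total = sum(weights)
        return [w / total for w in weights]

    # Ex001 (first row above), for a student of average ability (0):
    probs = category_probabilities(0.0, [-2.005, -0.450, -0.057])
    print([round(p, 2) for p in probs])  # ~[0.03, 0.23, 0.36, 0.38]

Ordered thresholds, with each successive value larger than the last,
are what Table 1 reports for all 62 retained items; disordered
thresholds were one of the grounds on which items from the 120 item
pool were discarded.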
Table 1 Summary data of the reliabilities and fit statistics to the
model for the 120 item and 62 item scales (N = 305)
                                       120 item scale   62 item scale
Non-fitting items                      58               none
Disordered thresholds                  58               none
Index of Student Separability          n.a.             0.93
Item-trait interaction (chi-square)    548 (p<0.001)    223 (p<0.05)
Item fit statistic      M              +0.336           +0.480
                        SD             +1.005           +0.848
Student fit statistic   M              -0.23            -0.171
                        SD             +2.577           +2.092
Power of test-of-fit                   n.a.             excellent
Notes
(1) The Index of Student Separability is the proportion of observed
variance that is considered true.
(2) The item and student fit statistics have an expectation of
a mean near zero and a standard deviation near one, when the data
fit the model.
(3) The item-trait interaction test is a chi-square. The results
indicate that there is good collective agreement for all items
across students of differing Quality of Student Experiences.
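Note (1) defines the index as a true-variance proportion. A minimal
Python sketch of that computation follows; the variance figures are
hypothetical, chosen only to reproduce the reported 0.93, since the
actual variance components are not given here.

    def separability_index(observed_variance, error_variance):
        # Index of Student Separability (Note 1): the proportion of
        # the observed variance in student measures regarded as true.
        return (observed_variance - error_variance) / observed_variance

    # Hypothetical values: observed variance 1.00 (logits squared) and
    # average error variance 0.07 yield the 0.93 reported in Table 1.
    print(round(separability_index(1.00, 0.07), 2))  # 0.93

Values near 1 indicate that student measures are well separated
relative to their measurement error, analogous to a traditional
reliability coefficient.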
Table 2 Range and mean scores for the sub-scales in the 62 item scale
Sub-scale                        Highest score   Lowest score   Mean score
My Course (n=18/18)              +0.336          -1.317         -0.491
Writing (n=5/16)                 +0.135          -0.614         -0.274
The Library (n=7/14)             +0.565          -0.747         -0.062
Student Acquaintances (n=7/12)   +0.261          -0.055         +0.118
My Lecturers (n=13/16)           +1.170          -0.687         +0.348
Vocations (n=9/14)               +0.813          -0.066         +0.386
The Arts (n=1/12)                n.a.            n.a.           n.a.
The Sciences (n=2/18)            n.a.            n.a.           n.a.
Notes
(1) Numbers in brackets represent items fitting the model out of the
total possible.
(2) Table scores are in logits.
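As a check on Note (1), the Table 2 mean for Student Acquaintances can
be reproduced in a few lines of Python from the seven fitting
locations for that sub-scale in Appendix II (Ex050, Ex051, Ex052,
Ex054, Ex055, Ex059 and Ex060).

    # Appendix II locations (logits), Student Acquaintances sub-scale.
    locations = [-0.055, -0.048, 0.142, 0.169, 0.320, 0.039, 0.261]
    print(round(sum(locations) / len(locations), 3))  # 0.118, Table 2

The n=7/12 entry in Table 2 corresponds to these seven fitting items
out of the sub-scale's twelve.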
References
Ajzen, I. (1989). Attitude structure and behaviour. In A.
Pratkanis, A. Breckler, & A. Greenwald (Eds.), Attitude structure
and function (pp. 241-274). Hillsdale, NJ: Lawrence Erlbaum.
Andrich, D. (1982). Using latent trait measurement to analyse
attitudinal data: A synthesis of viewpoints. In D. Spearritt (Ed.), The
improvement of measurement in education and psychology (pp. 89-126).
Melbourne: ACER.
Andrich, D. (1985). A latent trait model for items with response
dependencies: Implications for test construction and analysis. In S.E.
Embretson (Ed.), Test design: Developments in psychology and
psychometrics (pp. 245-275). Orlando, FL: Academic Press.
Andrich, D. (1988a). A general form of Rasch's extended
logistic model for partial credit scoring. Applied Measurement in
Education, 1 (4), 363-378.
Andrich, D. (1988b). Rasch models for measurement (Sage university
paper on quantitative applications in the social sciences, series number
07/068). Newbury Park, CA: Sage.
Andrich, D. (1989). Distinctions between assumptions and
requirements in measurement in the social sciences. In J.A. Keats,
R. Taft, R.A. Heath, & S. Lovibond (Eds.), Mathematical and theoretical
systems (pp. 7-16). Amsterdam: Elsevier Science Publishers.
Andrich, D., Sheridan, B., Lyne, A., & Luo, G. (1998). RUMM: A
Windows-based item analysis program employing Rasch unidimensional
measurement models. Perth: Murdoch University.
Burke, B.D. (1986a). Difficulties experienced by overseas
undergraduates and the provision of appropriate support services. Paper
presented at the Annual Conference of the Australian and New Zealand
Comparative and International Education Society in Brisbane, December
1986.
Burke, B.D. (1986b). Experiences of overseas undergraduate
students. Unpublished paper. Sydney: University of NSW, Student
Counselling and Research Unit.
Burke, B.D. (1988a). From the airport or an Australian high school:
Different experiences of overseas students in their first-year at
university. Paper presented to the Asia-Pacific Conference on Student
Affairs, organised by the Hong Kong Student Services Association in Hong
Kong, July 1988.
Burke, B.D. (1988b). The responsibilities of institutions admitting
full-fee overseas students. Paper presented at the seminar, Full
fee-paying overseas students and institutional responsibility, sponsored
by the Australian and New Zealand Student Services Association in
Melbourne, April 1988.
Burke, B.D. (1990). Meeting the reasonable expectations of overseas
students. Paper presented at the International Education Seminar on
Overseas Students in Adelaide, South Australia, July 1990.
Conway, M. & Ross, M. (1984). Getting what you want by revising
what you had. Journal of Personality and Social Psychology, 47, 738-748.
Divgi, D.R. (1986). Does the Rasch model really work for multiple
choice items? Not if you look closely. Journal of Educational
Measurement, 23(4), 283-298.
Ethington, C.A. & Polizzi, T.B. (1996). An assessment of the
construct validity of the CCSEQ Quality of Effort Scales. Research in
Higher Education, 37(6), 711-730.
Fishbein, M. & Ajzen, I. (1975). Belief, attitude, intention
and behaviour. Reading, MA: Addison-Wesley.
Friedlander, J., Pace, C., & Lehman, P. (1990). Community
College Student Experiences Questionnaire. Memphis: University of
Memphis, Centre for the Study of Higher Education.
Goldstein, H. (1980). Dimensionality, bias, independence and
measurement scale problems in latent trait test score models. British
Journal of Mathematical and Statistical Psychology, 33, 234-246.
Johnson, T. (1997). The 1996 Course Experience Questionnaire.
Parkville, Vic.: Graduate Careers Council of Australia.
Lehman, P.W. (1991). Assessing the quality of community college
student experiences: A new measurement instrument. Unpublished doctoral
dissertation. University of California, Graduate School of Education,
Los Angeles.
Lehman, P.W. (1992). CCSEQ: Test manual and comparative data. Los
Angeles, CA: University of California, Centre for the Study of
Evaluation.
McInnis, C., James, R., & McNaught, C. (1995). First year on
campus: Diversity in the initial experiences of Australian
undergraduates. Canberra: AGPS.
Mullins, G., Quintrell, N., & Hancock, L. (1995). The
experiences of international and local students at three Australian
universities. Higher Education Research and Development, 14(2),
201-231.
Pace, C.R. (1979a). The College Student Experiences Questionnaire.
Bloomington: Indiana University, School of Education.
Pace, C.R. (1979b). Measuring outcomes of college: Fifty years of
findings and recommendations for the future. San Francisco:
Jossey-Bass.
Pace, C.R. (1984). Measuring the quality of student experiences.
Los Angeles, CA: University of California, Graduate School of Education,
Higher Education Research Institute.
Pace, C.R. (1992). College Student Experiences Questionnaire: Norms
for the third edition, 1990. Los Angeles, CA: University of California,
Centre for the Study of Evaluation.
Pascarella, E.T. & Terenzini, P.T. (1991). How college affects
students: Findings and insights from twenty years of research. San
Francisco, CA: Jossey-Bass.
Quintrell, N. (1990). A survey of overseas students during their
first year at Flinders University 1989. Unpublished paper, Flinders
University, Health and Counselling Service, Adelaide.
Quintrell, N. (1991). The experiences of international students at
Flinders University: Report of surveys 1988-1990. Unpublished paper.
Flinders University, Health and Counselling Service, Adelaide.
Quintrell, N. (1992). The experiences of international students at
Flinders University: Report of surveys 1988-1991. Unpublished paper.
Flinders University, Health and Counselling Service, Adelaide.
Rasch, G. (1980). Probabilistic models for intelligence and
attainment tests (rev. ed.). Chicago: University of Chicago Press.
(Original work published 1960)
Ross, M. (1989). Relation of implicit theories to the construction
of personal histories. Psychological Review, 96, 341-357.
Traub, R.E. (1983). A priori considerations in choosing an item
response model. In R.K. Hambleton (Ed.), Applications of item response
theory (pp. 57-70). Vancouver: Educational Research Institute of British
Columbia.
Waugh, R. (1998). The Course Experience Questionnaire: A Rasch
measurement model analysis. Higher Education Research and Development,
17(1), 45-64.
Waugh, R. (1999a). Approaches to studying for students in higher
education. British Journal of Educational Psychology, 69(1), 63-79.
Waugh, R. (1999b). A revised Course Experience Questionnaire for
student evaluation of university courses. In T. Hand & K. Trembath
(Eds.), The Course Experience Questionnaire Symposium 1998 (pp. 61-80).
Canberra: Department of Education, Training and Youth Affairs, Higher
Education Division.
Wilson, L., Lizzio, A., & Ramsden, P. (1997). The development,
validation and application of the Course Experience Questionnaire.
Studies in Higher Education, 22(1), 33-53.
Wright, B.D. (1985). Additivity in psychological measurement. In
E.E. Roskam (Ed.), Measurement and personality assessment (pp. 101-112).
Amsterdam: Elsevier Science Publishers.
Wright, B. & Masters, G. (1981). The measurement of knowledge
and attitude (Research memorandum no. 30). Chicago: University of
Chicago, Department of Education.
Wright, B. & Masters, G. (1982). Rating scale analysis: Rasch
measurement. Chicago: MESA Press.
Dr Russell Waugh is a Senior Lecturer in the School of Education,
Edith Cowan University, Pearson Street, Churchlands, Western Australia 6018.