Curricula assessment using the course diagnostic survey: a proposal.
Jackson, William T.; Jackson, Mary Jo; Gaulden, Corbett F. Jr.
ABSTRACT
The purpose of this paper is to develop a model of an alternative
approach to assessing courses and, ultimately, curricula. Borrowing from
Job Characteristics Theory, a modified survey, the Course Diagnostic
Survey (CDS), was developed. Using this instrument, a model is suggested
that measures attitudes and the resulting outcomes at both the course
and overall program level. The model offers a roadmap of the course and
program components that directly affect desired outcomes. Hypotheses are
proposed to study the potential of the CDS as an appropriate tool for
assessment.
INTRODUCTION
Assessment of student learning has moved to the forefront of
business schools over the last decade (Palomba & Banta, 1999, 2001;
Banta, Lund, Black & Oblander, 1996). Much of this new emphasis is
directly attributable to AACSB International expectations. In fact, many
expect this emphasis to increase under the recently adopted standards
that focus on assurance of learning (Black & Duhon, 2003;
Mirchandani, Lynch & Hamilton, 2001; Michlitsch & Sidle, 2002).
Most schools seeking to assess student learning fall back on
administering standardized tests, embedding measurements within courses,
or conducting post surveys. An almost universal approach has been to
survey students with either the periodic student evaluations
administered each semester or a locally prepared instrument that asks a
series of questions to elicit attitudinal responses. Many of these
instruments lack a high level of internal validity.
The purpose of this research is to suggest a model using a
modification of an instrument that has already proven to be valid and
reliable at measuring motivational aspects of a job. The instrument to
be recommended, the Course Diagnostic Survey (CDS), is adapted from the
Job Diagnostic Survey (JDS). This instrument addresses what many, such as
Charles Duke (2002), see as being as important as actual content
absorption: student perceptions. As will be presented in subsequent
sections of this paper, the CDS focuses on how the course design creates
unique psychological states (students' feelings toward their educational
environment) and thus produces affective outcomes (satisfaction or the
lack thereof).
BACKGROUND
No longer content with just technical competence from our business
school graduates, employers are now demanding "... skills in
leadership, problem solving, oral and written communication, along with
attributes of motivation and assertiveness" (Fontenot, Haarhues
& Hoffman, 1991, p. 56). However, the ability of our institutions of
higher education to meet these changing demands has been severely
questioned. Harvard President Emeritus Derek Bok (1992) has chided our
universities for their failure to even examine the effectiveness of
their educational programs.
Fortunately, one stream of research has begun to investigate the
effectiveness of selected programs using cognitive scales for this
purpose in business curricula. Using their Skills/Career Usefulness
scale, Fontenot et al. (1991) studied the effectiveness of Small
Business Institute (SBI) courses and Business Policy courses in
developing desired student skills. Using job analysis and design
techniques developed for work environments, Watts and Jackson (1995)
investigated the applicability of Hackman and Oldham's (1976) Job
Characteristics Theory to course design. Job Characteristics Theory has
also been used to assess an institution's student evaluations of
instruction (Watts, 1992) and to analyze the effect of course redesign
on SBI student outcomes (Watts, Jackson & Box, 1995).
Job Characteristics Theory proposes that positive outcomes will result
in the workplace (high motivation, high satisfaction with the job, and a
high level of performance) when three critical psychological states
exist: experienced meaningfulness of the job, experienced responsibility
for the outcomes of the job, and knowledge of the results of the job.
The theory goes on to suggest that these three critical psychological
states are created when specific core job characteristics are present.
These core job characteristics are skill variety, task identity, task
significance, a high level of autonomy, and effective feedback. Not all
individuals will respond equally, however; responses are moderated by
each person's growth need strength, that is, the strength of the
individual's desire for personal growth and development. This model is
presented in Figure 1.
As can be seen in Table 1, there appears to be an intuitive
relationship between what occurs in the job setting and what occurs in
the academic classroom. While this relationship may not be exact, it
does offer promising possibilities.
METHOD
Subjects
The subjects for this study were 586 undergraduate and graduate
students in the school of business of a small southwest regional
university. Students were represented across all academic disciplines,
age distribution, sex, and ethnic background. This number represented
nearly 100 percent of all students enrolled in the school. Students were
asked to complete the Course Diagnostic Survey. No incentive or penalty
was provided for participation in the survey. During this initial phase
of the study and for statistical comparison, the results of all
participants were combined into one group.
Instrument
As previously mentioned, the instrument used was a modified Job
Diagnostic Survey (JDS), resulting in the Course Diagnostic Survey (CDS).
The instrument was used to collect perceptions of core course
characteristics, critical psychological states, growth need strength,
internal academic motivation and course satisfaction. Few modifications
were needed to apply the original instrument to the academic environment
being examined in this study. Seven-point scales were used to maintain
consistency with the JDS. This approach has proven to be valid in
several other studies involving students in the academic setting (Watts,
1992; Watts, Jackson & Box, 1995; Watts & Jackson, 1995;
Fontenot, Haarhues & Hoffman, 1991).
Course Components
Course components were measured on a seven-point scale ranging from very
inaccurate to very accurate, indicating how much the student agreed with
each statement. The course component skill variety was
measured by the items "the course requires me to use a number of
high and complex skills" and a reverse score of "the course is
quite simple and repetitive". Task identity was measured by
"the course provides me the chance to completely finish the pieces
of work I begin" and a reverse score on the item "The course
is arranged so that I do not have the chance to do an entire piece of
work from beginning to end".
Task significance was indicated through responses on "This
course is one where a lot of other people can be affected by how well
the work gets done" and a reverse response on "The course itself is
not very significant or important in the broader scheme of things".
Autonomy was shown through the items "The course gives me
considerable opportunity for independence and freedom in how I do the
work" and "The course denies me any chance to use my personal
initiative or judgment in carrying out the work (reverse scored)".
The final two components, feedback from the course and feedback
from the instructor, were measured respectively by "Just doing the
work required by the course provides many chances for me to figure out
how well I am doing", "The course itself provides very few
clues about whether or not I am performing well (reversed)" and
"The instructor often lets me know how well I am performing",
"The instructor and fellow students in this course almost never
give me any feedback about how well I am doing in my work
(reversed)".
Table 2 and Figure 2 present the means and standard deviations for the
sample on the six course characteristics, i.e., skill variety, task
identity, task significance, autonomy, feedback from the course, and
feedback from the instructor.
[FIGURE 2 OMITTED]
Critical Psychological States
The critical psychological states inspired by the course components
were also measured in this study. As stated before, the CDS used a
seven-point Likert scale measuring how well the student agreed with each
statement, with responses ranging from very inaccurate to very accurate.
The psychological state meaningfulness was indicated by two items,
"The work I do in this course is very meaningful to me" and
"Most of the things I have to do in this course seem useless or
trivial" which was reverse scored. Responsibility was measured by
"I feel a high degree of personal responsibility for the work I do
in this course", "I feel I should personally take the credit
or blame for the results of my work in this course", "Whether
or not course work gets done right is clearly my responsibility"
and the reversed item "It's hard, in this course, for me to
care very much about whether or not the work gets done right". The
last psychological state measured, knowledge of results, was indicated
by "I usually know whether or not my work is satisfactory in this
course" and "I often have trouble figuring out whether
I'm doing well or poorly in this course" which was reverse
scored.
Table 3 and Figure 3 present the means and standard deviations of the
three psychological states: meaningfulness, responsibility, and
knowledge of results.
[FIGURE 3 OMITTED]
Student Outcomes
The third component of the CDS measured two student outcomes: general
satisfaction with the course and student motivation. The seven-point
Likert scale indicated how well the student agreed with each statement,
with responses ranging from very inaccurate to very accurate.
General satisfaction with the course was measured by three items.
These included "Generally speaking, I am very satisfied with this
course", "I am generally satisfied with the kind of work
required in the course" and "I frequently think of dropping
this course" that was reverse scored. The outcome, student
motivation, was determined by responses on "My opinion of myself
goes up when I do the course work well", "I feel a great sense
of personal satisfaction when I do the course work well", "I
feel bad and unhappy when I discover that I have performed poorly in
this course", and the reverse scored item "My own feelings
generally are not affected much one way or the other by how well I do in
this course".
The means and standard deviations of the student responses for the
general satisfaction and motivation outcomes are shown below.
[FIGURE 4 OMITTED]
Motivating Potential Score
The Motivating Potential Score (MPS) indicates the motivating potential
of a job or, with the CDS, a course. It would be measured from the
responses of students in individual courses and calculated by the
following formula:
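Following the standard JDS formulation (Hackman & Oldham, 1975), with
the course-level CDS components substituted for their job counterparts:

    MPS = ((skill variety + task identity + task significance) / 3)
          x autonomy x feedback from the course itself

Purely for illustration, applying this formula to the whole-sample means
in Table 2 (using the entry labeled feedback from job as the feedback
term) gives MPS = ((5.32 + 5.30 + 4.66) / 3) x 4.48 x 4.96, or roughly
113; in practice the score would be computed separately for each course.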
PROCEDURE
To capture the influences of course related activities, the
instrument was administered late in the semester. On a predetermined date, instructors announced in class that students had been asked to
participate in an important study and read the following instructions:
This questionnaire was developed as part of a study of
course-related activities and how students react to them. The
questionnaire helps to determine how courses can be better designed
by obtaining information about how students react to different
kinds of course-related activities.
On the following pages you will find several different kinds of
questions about your course. Specific instructions are given at the
start of each section; please read them carefully. It should take
no more than 15 minutes to complete the entire questionnaire.
Please move through it quickly.
The questions are designed to obtain your perceptions of
course-related activities and your reactions to them. There are no
trick questions. Your individual answers will be kept completely
confidential. Please answer each item as honestly and frankly as
possible.
Thank you for your cooperation.
DISCUSSION
As stated in the introduction, the main purpose of this exploratory
study was to propose the use of the Course Diagnostic Survey as a means
of assessing courses in an academic setting. Just as the original
instrument was intended to assess the impact of job redesign, the
modified CDS could be equally successful in assessing the impact of
specific course components on psychological states and outcomes in the
educational setting. While it appears that
the instrument has potential in this area, additional study is needed to
validate the instrument in an educational setting. Specifically, three
hypotheses are proposed.
H1: The CDS course components, i.e., skill variety, task identity,
task significance, autonomy, feedback from the course, and feedback from
the instructor, lead to the indicated psychological states.
H2: The critical psychological states in the academic setting as
measured by the CDS will be related to general satisfaction and
motivation.
H3: The MPS as measured by the CDS will indicate the motivating
potential of a specific course.
Future studies using the study sample, as well as other samples, should
attempt to demonstrate the relationships indicated in the hypotheses;
one possible analysis sketch is given below. This would assist in the
validation of the CDS and its use in course assessment. It is also
suggested that additional studies consider the use of the model in
overall program development.
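The sketch assumes scored responses sit in a pandas DataFrame with one
row per student; the column names are illustrative only, and simple
Pearson correlations stand in for whatever modeling approach a full
validation study would employ.

    # Hypothetical sketch: correlations for H1 (course components -> psychological
    # states) and H2 (states -> satisfaction and motivation). Column names are
    # illustrative only.
    import pandas as pd

    components = ["skill_variety", "task_identity", "task_significance",
                  "autonomy", "feedback_course", "feedback_instructor"]
    states = ["meaningfulness", "responsibility", "knowledge_of_results"]
    outcomes = ["general_satisfaction", "internal_motivation"]

    def correlation_block(df, rows, cols):
        # Pearson correlations between two sets of CDS scale scores.
        return df[rows + cols].corr().loc[rows, cols]

    # With a DataFrame `cds` of scored responses (one row per student):
    # h1_table = correlation_block(cds, components, states)
    # h2_table = correlation_block(cds, states, outcomes)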
REFERENCES
Ater, E.C. & Coulter, K.L. (1980). Consumer internships:
Encouraging consumer/business dialogue. Journal of Business
Communications, 17(2), 33-39.
Banta, T.W., Lund, J.P., Black, K. & Oblander, F.W. (1996).
Assessment in Practice. San Francisco: Jossey-Bass Publishers.
Barnett, S.T., Dascher, P.E. & Nicholson, C.Y. (2004). Can
school oversight adequately assess department outcomes? A study of
marketing curriculum content. Journal of Education for Business, 79(3),
157-162.
Black, H.T. & Duhon, D.L. (2003). Evaluation and improving
student achievement in business programs: The effective use of
standardized assessment tests. Journal of Education for Business, 79(2),
90-98.
Bok, D. (1992). Reclaiming the public trust. Change, August, 23-29.
Cordery, J.L. & Sevastos, P.P. (1993). Responses to the
original and revised job diagnostic survey. Journal of Applied
Psychology, 78(1), 141-143.
Davis, M.A., Curtis, M.B. & Tschetter, J.D. (2003). Evaluating
cognitive training outcomes: Validity and utility of structural
knowledge assessment. Journal of Business & Psychology, 18(2), 191.
Duke, C.R. (2002). Learning outcomes: Comparing student perceptions
of skill level and importance. Journal of Marketing Education, 24(3),
203-217.
Fontenot, G., Haarhues, M. & Hoffman, L. (1991). The benefits
of the SBI program: Perceptions of former students. Journal of Small
Business Strategy, 2(1), 56-71.
Hackman, J.R. & Lawler, E.E. (1971). Employee reactions to job
characteristics. Journal of Applied Psychology Monograph, 55, 259-286.
Hackman, J.R. & Oldham, G.R. (1975). Development of the job
diagnostics survey. Journal of Applied Psychology, 60, 159-170.
Hackman, J.R. & Oldham, G.R. (1976). Motivation through the
design of work: Test of a theory. Organizational Behavior and Human
Performance, 16, 250-279. Hackman, J.R. & Oldham, G.R. (1980). Work
Redesign. Reading, MA: Addison-Wesley.
Hoffman, L., Fontenot, G. & Viswanathan, R. (1990). An
exploratory evaluation of the effectiveness of the SBI program as
perceived by quantitative and non-quantitative majors. Proceedings of
the 1990 SBIDA National Conference, 80-85.
Kent, T.W. & Davis, T.J. (2002). Using retranslation to develop
operationally anchored scales to assess the motivational context of
jobs. International Journal of Management, 19(1), 10.
Lawrence, E. (1990). Learning portfolio management by experience:
University student investment funds. The Financial Review, 25, 165+.
Luthans, F. (1992). Organizational Behavior. New York: McGraw-Hill,
Inc.
Marchese, M.C. (1998). Some factors affecting the relationship
between job characteristics and job worth: A job-role interpretation.
Journal of Organizational Analysis, 6(4), 355-369.
Michlitsch, J.F. & Sidle, M.W. (2002). Assessing student
learning outcomes: A comparative study of techniques used in business
school disciplines. Journal of Education for Business, 77(3), 125-130.
Mirchandani, D., Lynch, R. & Hamilton, D. (2001). Using the ETS major field test in business: Implications for assessment. Journal of
Education for Business, 77(1), 51-57.
Oldham, G.R., Hackman, J.R. & Stepins, L.P. (1978). Norms for
the job diagnostic survey. Technical Report No. 16, School of
Organization and Management, Yale University, 1-45.
Palomba, C.A. & Banta, T.W. (2001). Assessing Student
Competence in Accredited Disciplines. Sterling, VA: Stylus Publishing,
Inc.
Palomba, C.A. & Banta, T.W. (1999). Assessment Essentials. San
Francisco: Jossey-Bass Publishers.
Riggio, R.E., Mayes, B.T. & Schleicher, D.J. (2003). Using
assessment center methods for measuring undergraduate business student
outcomes. Journal of Management Inquiry, 12(1), 68-78.
Rungtusanatham, M. & Anderson, J.C. (1996). A clarification on
conceptual and methodological issues related to the job characteristics
model. Journal of Operations Management, 14(4), 357-367.
Serva, M.A. & Fuller, M.A. (2004). Aligning what we do and what
we measure in business schools: Incorporating active learning and
effective media use in the assessment of instruction. Journal of
Management Education, 28(1), 19.
Spicer, D.P. (2004). The impact of approaches to learning and
cognition on academic performance in business and management. Education
and Training, 46(4/5), 194.
Tiegs, R.B., Tetrick, L.E. & Fried, Y. (1992). Growth need
strength and context satisfactions as moderators of the relations of the
job characteristics model. Journal of Management, 18, 575-593.
Watts. L.R. & Hudnall, J. (1991). Applying job characteristics
theory to course design. In G. James (Ed.), Proceedings of the Mountain
Plains Management Conference. Fort Collins, CO: Colorado State
University.
Watts, L.R. & Jackson, W.T. (1994). The effect of course
redesign on SBI student outcomes: An application of job characteristics
model. Journal of Business & Entrepreneurship, 6(1), 89-100.
Watts, L.R., Jackson, W.T. & Box, T.M. (1995). Student related
outcomes and task design characteristics. Texas Business Education
Association Journal, 5(1), 127-141.
Watts, L.R. & Jackson, W.T. (1995). The SBI program and student
outcomes: A study of business policy classes. Journal of Small Business
Strategy, 6(1), 93-103.
William T. Jackson, Dalton State College
Mary Jo Jackson, Dalton State College
Corbett F. Gaulden, Jr., University of Texas of the Permian Basin
Table 1: Job Characteristics Compared to Course Characteristics
VARIABLE | GENERAL DESCRIPTION | COURSE EQUIVALENCY
SKILL VARIETY | Usage of a wide variety of skills | Usage of a wide variety of skills
TASK IDENTITY | Task closure is evident | Assignments tie together course concepts in a clear manner
TASK SIGNIFICANCE | Outcomes are important | Assignments are important
AUTONOMY | Individuals have impact and are able to make a difference | Students have impact on course outcome
FEEDBACK FROM JOB | Job results are evident | Grades are provided in a timely fashion
FEEDBACK FROM AGENT | Supervisor provides result information | Instructor provides result information independent of grades
MEANINGFULNESS | Work is meaningful | Course is meaningful
RESPONSIBILITY | Responsible for work outcomes | Responsible for course outcomes
KNOWLEDGE OF RESULTS | Final outcomes are known | Final grades are known
GENERAL SATISFACTION | Overall satisfaction with job | Overall satisfaction with course
INTERNAL WORK MOTIVATION | Job is stimulating and challenging | Course is stimulating and challenging
MOTIVATING POTENTIAL SCORE (MPS) * | (sk. var. + task id. + task sign.)(autonomy)(job feedback) | (sk. var. + task id. + task sign.)(autonomy)(job feedback)
Table 2: Course Component Means and Standard Deviations
MEAN S.D.
Skill variety 5.32 1.25
Task identity 5.30 1.37
Task significance 4.66 1.24
Autonomy 4.48 .86
Feedback from job 4.96 1.37
Feedback from agents 5.05 1.45
Table 3: Means and Standard Deviations of Psychological States
                              MEAN   S.D.
Experienced meaningfulness    5.47   1.40
Experienced responsibility    5.66   1.23
Knowledge of results          5.13   1.50
Table 4: Student Outcomes Means and Standard Deviations
                              MEAN   S.D.
General satisfaction          5.48   1.54
Internal motivation           5.01   1.01
FIGURE 1: JOB CHARACTERISTICS THEORY
CORE JOB DIMENSIONS --> CRITICAL PSYCHOLOGICAL STATES --> PERSONAL AND WORK OUTCOMES
Skill variety, task identity, and task significance --> Experienced meaningfulness of the work
Autonomy --> Experienced responsibility for outcomes of the work
Feedback --> Knowledge of the actual results of the work activities
The three critical psychological states lead to high internal work
motivation, high quality work performance, high satisfaction with the
work, and low turnover and absenteeism. Growth need strength moderates
both the link from the core job dimensions to the psychological states
and the link from the psychological states to the outcomes.