Satisfaction with clinical training in Christian psychology doctoral programs: survey findings and implications.
McMinn, Mark R.; Bearse, Jennifer L.; Heyne, Laura K.; Staley, Ryan C.
The work of clinical psychologists covers a wide variety of tasks
ranging from research and program development to assessment,
supervision, and consultation. Likewise, training in clinical psychology
is diverse, with some training models emphasizing science more than
others (Cherry, Messenger, & Jacoby, 2000). But the common
denominator that runs through all training models and most of the work
that clinical psychologists perform is clinical work--assessing and
treating clients and patients. This is what distinguishes clinical
psychology from other specialty areas in psychology. As such, most
programs in clinical psychology emphasize clinical training a great
deal, and typically hire a Director of Clinical Training to coordinate
and develop clinical training efforts. Integrative doctoral programs
provide general training in psychology as well as clinical training
while also educating students in religious and spiritual issues,
especially those pertaining to Christianity (Johnson & McMinn,
2003).
Clinical training typically involves placements in community
settings during the first three to four years of training, and then a
full-time internship during the final year of training. In integrative
doctoral programs, these clinical training placements sometimes--but not
always--are done in the context of faith-affirming agencies where
religious and spiritual issues can be considered. Clinical training
opportunities vary among programs in terms of variety and availability
of practicum site selection, supervision, consultation, theoretical
orientation, opportunities for faith integration, flexibility of
scheduling, access to varying populations and pathologies, and so on.
Whereas classroom training is an integral part of preparing to become a
clinical psychologist, work done in these clinical settings is vital in
preparing doctoral candidates for the face-to-face unscripted
experiences of clinical practice.
How satisfying are these training placements? Are students who
complete their programs prepared and qualified to enter the field of
clinical psychotherapy? And how do we determine this, in light of the
lack of standardized measures to evaluate training and competence? These
questions are difficult to answer due to the limited methods that can be
used to obtain the information. Typical strategies that utilize control
groups and random assignment to differing training conditions are not
available when the subjects are engaged in lengthy and expensive
doctoral training programs. Still, one can do program evaluation by
asking students, alumni, and faculty for their candid impressions of
clinical training at their institution. Though this methodology may be
somewhat subjective and influenced by loyalty, it has nevertheless been
used in the past to evaluate the effectiveness of research training in
Christian psychology doctoral programs (McMinn, Hill, & Griffin,
2004).
Thus, the purpose of the present study was to assess the
satisfaction of students within explicitly Christian doctoral programs
regarding their clinical training. We surveyed faculty, current
students, and alumni.
Method
Procedures
In September 2010, program directors at each of the explicitly
Christian doctoral training programs in clinical psychology were invited
to participate in a survey research project designed to assess the
quality of their clinical training. Programs invited included Azusa
Pacific University, Fuller Theological Seminary, George Fox University,
The Institute for Psychological Sciences, Regent University, Rosemead
School of Psychology (Biola University), Seattle Pacific University, and
Wheaton College. Seven of the eight schools invited elected to participate
by sending an email invitation to current students, faculty, and a
representative group of alumni. All data were collected from September
through November of 2010.
Each of the participating doctoral programs was provided the data
for their respective program for purposes of self-study, but as a
research team we kept only the aggregate data file. The point of this
program evaluation is not to compare one program with another, but to
provide an overall sense of satisfaction regarding clinical training in
integrative doctoral programs.
Participants
In all, we received 228 completed questionnaires from current
students, 128 from alumni, and 34 from faculty, resulting in a total of
390 respondents. Among student respondents, 38 were first-year students,
37 second-year, 47 third-year, 51 fourth-year, and 55 fifth-year.
Because we did not have access to the mailing lists from the schools, we
cannot compute an overall response rate. When McMinn et al. (2004) did a
similar study with research training among integrative doctoral
programs, they estimated a response rate of 62% for students, 51% for
alumni, and 62% for faculty. The overall number of respondents in the
present study is approximately 10% lower than in the McMinn et al.
(2004) study, so it is likely that response rates hovered around 50%.
But this must be considered a rough estimate, as precise information
regarding the number of people invited to complete the questionnaire is
not available.
The average age was 46.7 years for faculty (standard deviation of
11.8), 28.2 years for students (standard deviation of 5.7), and 44.0
years for alumni (standard deviation of 12.2). The majority of
respondents (73.4%) were European-American, with other ethnicities being
represented in small proportions (2.0% African-American, 5.6%
Asian-American, 3.6% Latino, 0.8% Native American, 4.3% international,
and 6.4% other).
Instrument
In addition to basic demographic information, respondents were
asked to rate 20 items pertaining to the quality of clinical training at
their institution on a 5-point Likert scale, ranging from 1 (Very
Dissatisfied) to 5 (Very Satisfied). Finally, respondents were asked
two open-ended questions regarding the strengths and weaknesses of
clinical training in their program.
Results
Satisfaction Ratings
Table 1 summarizes the ratings on the 20 satisfaction items. The
items are listed in order of the overall satisfaction ratings, with the
highest rated items at the top of the list. We evaluated items for
differences, both within-groups and between-groups.
We found overall differences among the 20 items, Wilks'
Λ(19, 327) = .313, p < .001, which justified profile analyses
using paired-sample t-tests to determine which items were significantly
lower than the preceding item on a rank-ordered list, using a
conservative α of .01 to control for Type I error. Results of the
profile analysis are reported in Table 1.
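The stepdown logic of this profile analysis can be sketched as follows. The ratings below are simulated illustrative data, and the item means (4.2, 4.1, 3.8) are assumptions for the sketch, not values from the study:

```python
import numpy as np
from scipy import stats

# Simulated ratings: rows = respondents, columns = satisfaction items
# already sorted by overall mean (highest first). The means are
# illustrative assumptions, not the study's data.
rng = np.random.default_rng(0)
ratings = np.clip(
    rng.normal(loc=[4.2, 4.1, 3.8], scale=0.6, size=(200, 3)), 1, 5
)

# Profile-analysis stepdown: compare each item with the item ranked
# just above it, flagging drops significant at the conservative
# alpha of .01 used in the study.
for i in range(1, ratings.shape[1]):
    t, p = stats.ttest_rel(ratings[:, i - 1], ratings[:, i])
    flag = "*" if p < .01 else ""
    print(f"item {i + 1} vs item {i}: t = {t:.2f}, p = {p:.4f} {flag}")
```

An asterisk marks an item rated significantly below its predecessor, which is how the asterisks in Table 1 should be read.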
With the between-group analysis we tested for group differences
among student, faculty, and alumni ratings, again using a conservative
α of .01 to control for Type I error. Group differences were
observed on 12 of the 20 satisfaction items. On these 12 items we then
used post-hoc Scheffe tests to identify which groups differed from one
another, using a standard α of .05 because the Scheffe post-hoc
test is already quite conservative. Among the items with group
differences, faculty and/or alumni reported more favorable opinions than
students.
An overall composite satisfaction rating was computed as the mean
of all 20 items. An overall group difference was present, F(2, 385) =
11.6, p < .01, with both faculty and alumni reporting greater
satisfaction than students.
Table 2 shows the current findings regarding clinical training in
relation to overall satisfaction ratings for research training at
integrative doctoral programs, as reported by McMinn et al. (2004).
Present students report higher overall satisfaction with clinical
training than students reported for research training in 2004, t(505) =
5.7, p < .01, Cohen's d effect size = .50. Similarly, alumni in
the present study rated clinical training more favorably than alumni
rated research training in 2004, t(224) = 10.1, p < .01, Cohen's
d effect size = 1.3. Faculty also reported higher clinical training
satisfaction than what faculty reported for research training in 2004,
t(83) = 3.6, p < .01, Cohen's d effect size = .82.
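These between-study comparisons are expressed as Cohen's d. A minimal sketch of the pooled-SD version of d, applied to the rounded alumni summary values reported in Table 2 (because the table values are rounded, the result differs somewhat from the published effect size of 1.3, which was presumably computed from the unrounded data):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled = math.sqrt(
        ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    )
    return (m1 - m2) / pooled

# Rounded alumni values from Table 2: clinical training 4.0 (0.7),
# n = 128, vs. research training 3.0 (0.7), n = 98.
d = cohens_d(4.0, 0.7, 128, 3.0, 0.7, 98)
print(f"d = {d:.2f}")  # prints "d = 1.43"
```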
Factor Analysis
We conducted a factor analysis of the 20 items, using principal
components analysis with orthogonal (varimax) rotation and a standard
eigenvalue criterion of 1.0. Three factors emerged as significant, with 19 of the
20 items loading on only one factor with a loading of .5 or higher
(see Table 3). We identified these factors as Professional Development,
Clinical Placements, and Support and Supervision. One item ("The
connection between research and clinical training at practicum
sites") was omitted because it loaded on both the Professional
Development (.60) and Support and Supervision (.55) factors.
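The retention rule used here (keeping principal components of the correlation matrix whose eigenvalue exceeds 1.0, i.e., the Kaiser criterion) can be sketched with a toy correlation matrix; the matrix below is an illustrative assumption, not the study's 20-item data, and the varimax rotation step is omitted for brevity:

```python
import numpy as np

# Toy correlation matrix with two clear clusters of items; the study's
# actual input was the correlation matrix of the 20 satisfaction items.
R = np.array([
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.8],
    [0.0, 0.0, 0.8, 1.0],
])

# Principal components of a correlation matrix come from its
# eigendecomposition; the Kaiser criterion retains components whose
# eigenvalue exceeds 1.0.
eigenvalues = np.linalg.eigvalsh(R)[::-1]  # descending order
retained = int(np.sum(eigenvalues > 1.0))
print(f"eigenvalues: {np.round(eigenvalues, 2)}, retained: {retained}")
```

Here the two correlated pairs yield two eigenvalues of 1.8, so two components are retained.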
By treating the factors as subscales, we then computed mean ratings
on each of the three factors. An overall difference was observed among
the three factors, Wilks' Λ(2, 382) = .934, p < .001,
justifying profile analysis. The Support and Supervision factor was
rated most highly (M = 3.95, SD = 0.85), which was significantly
higher than the Clinical Placements factor (M = 3.80, SD = 0.77),
t(384) = 4.2, p < .01. The Clinical Placements factor was, in turn,
significantly higher than the Professional Development factor (M =
3.74, SD = 0.73), t(383) = 2.0, p < .05.
We also looked for group differences among faculty, students, and
alumni on the three factors. No group differences were observed for the
Support and Supervision factor. The Clinical Placements factor showed
overall group differences, F(2, 382) = 23.7, p < .01. Post-hoc
Scheffe tests revealed that both faculty and alumni rated this factor
higher than students. Similarly, the Professional Development factor
showed overall group differences, F(2, 382) = 8.8, p < .01. Scheffe
tests showed that alumni rated this factor higher than students.
Strengths and Areas for Enhancement
In addition to the satisfaction rating items, we asked participants
to identify one or two strengths of the clinical training in their
doctoral program. We also asked them how clinical training could be
enhanced in their program. Both were open-ended qualitative items.
Though these questions were intended mainly for individual programs'
self-study, we analyzed themes from the overall data set. After an
initial training session, one author coded responses to each qualitative
item using grounded theory methods.
As with the program evaluation of research training reported by
McMinn et al. (2004), the most prominent strength identified pertained
to student-faculty relationships. Respondents wrote comments such as:
"The professors are continually making an effort to improve our
clinical training", "The relationship that the faculty has
with their students is open and safe", and "The faculty truly
care for the students." Approximately 40% of the 315 comments
offered regarding strengths of training pertained to faculty-student
relationships. Other themes included instructional resources,
integration of psychology and Christianity, diversity of training
experiences, relationships with clinical supervisors and other students,
and learning theoretical perspectives in clinical psychology.
When asked about areas of enhancement, the primary theme identified
among the 282 comments offered pertained to instruction in the classroom
and in clinical training sites. Respondents wrote comments such as:
"More exposure to and observation of faculty clinical work in order
to learn from their experiences and facilitate our ability to develop a
theoretical orientation", "More integration experiences in
clinical training sites are needed", and "Deeper theological
and theoretical training." Approximately one-fourth of the comments
pertained to instructional enhancements. Other themes included concerns
with supervision, the need for better coordination of practicum
placements, suggestions for enhancing the breadth of training, the need
for more time with faculty, and increasing diversity in training sites.
Discussion
The overall satisfaction with clinical training in integrative
doctoral programs in clinical psychology appears to be strong. Ratings
hover near the top end of the 5-point Likert scale used for satisfaction
ratings, and they are consistently higher than the research training
ratings from integrative doctoral programs reported by McMinn and
colleagues in 2004. The difference in alumni ratings between clinical
and research training had an enormous effect size of 1.3.
As with the research ratings reported in 2004, the clinical
training ratings reported here are generally lower for students than for
faculty. This seems reasonable given both the stress that students face
in balancing all the responsibilities of doctoral studies and the
relatively higher degree of investment that faculty have in the quality
and reputation of training. This is not to say that faculty are more
objective--they may or may not be--but merely that faculty have a longer
term commitment to an institution than students do.
Somewhat surprisingly, alumni ratings were higher than students' ratings.
With research training (McMinn et al., 2004), alumni ratings were lower
than those of both students and faculty. Alumni ratings provide a unique vantage
point of reflection over time. After students graduate they are able to
compare their preparation with other colleagues in the field, and that
likely influences their views of the training they received. These
reflections apparently produce enhanced opinions about clinical
training, relative to current student views, and diminished views of
research training.
In sum, it seems not too far a stretch to say that integrative
doctoral programs in clinical psychology are doing a somewhat better job
in clinical training than in research training. This conclusion must be
viewed cautiously, of course, both because of the limitations inherent
in survey research and because both this study and the previous one
(McMinn et al., 2004) offer little more than satisfaction ratings from
constituents of the institutions being studied. Also, it is possible
that integrative doctoral programs have changed between 2003 and 2010.
Perhaps research training would be rated more highly now than was the
case in the 2004 report.
It is also telling that student-faculty relationships were the
primary strength identified in the open-ended question about program
strengths. This was also the case in the McMinn et al. (2004) study,
suggesting this is an important and perhaps distinguishing feature of
integrative doctoral programs. That is, students, alumni, and faculty
are enthused about the sort of working relationships that develop in
these programs, and they are quick to identify these collaborative
relationships as strengths of their programs. These positive,
collaborative relationships are not limited to faculty and students, as
students appear to be quite enthused about their relationships with
clinical site supervisors as well. The Supervision and Support factor
was the highest rated of the three factors emerging from our factor
analysis.
Rightly, doctoral programs always look for ways to enhance
training. Results from the present study indicate that the most useful
domain on which to focus these efforts is the integration of on-campus
instruction with clinical placements. The qualitative data revealed
various suggestions for enhancing instruction on campus. Similarly, the
Professional Development factor was the lowest rated of the three
factors in the factor analysis. Programs might focus especially on
building stronger connections between research training and clinical
training, and providing more guidance for students as they develop a
theoretical orientation. That said, it should also be noted that these
are not glaring weaknesses. Even the lowest rated satisfaction item was
slightly above the midpoint on the 5-point satisfaction scale.
Various limitations to this research should be noted. First, survey
research always carries the risk of response bias. Those who responded
may vary in systematic ways from those choosing not to respond. This is
complicated by the difficulty in assessing response rates. Second, our
agreement with the various doctoral programs studied was not to compare
programs with one another, so we end up drawing general conclusions
about integrative doctoral programs that may not be true for any
individual program. We are pleased that the seven doctoral programs
involved in this study have also provided an article describing their
clinical training that appears in this special issue. These narratives
describe the distinct approaches of each program. Third, satisfaction
studies such as this, like effectiveness studies in general, are not
well controlled. We have no control group to help us interpret what an
acceptable or typical rating might be on the 5-point scale we used.
Though it is helpful to compare these findings with the research
satisfaction data reported by McMinn et al. in 2004, even that is not a
pristine comparison because of the time intervening between the two
studies and the slight differences in the scales used.
In conclusion, students, faculty, and alumni of integrative
doctoral programs in clinical psychology report a positive experience in
clinical training. Programs are providing support for students who are,
in turn, generally pleased with the training they receive at their
placement sites. While all areas are favorably rated, it appears that
instructional support is not perceived to be quite as strong as
relational support, and that opinions about clinical training, like good
wine, become more favorable over time.
References
Cherry, D. K., Messenger, L. C., & Jacoby, A. M. (2000). An
examination of training model outcomes in clinical psychology programs.
Professional Psychology: Research and Practice, 31, 562-568.
Johnson, W. B., & McMinn, M. R. (2003). Thirty years of
integrative doctoral training: Historic developments, assessment of
outcomes, and recommendations for the future. Journal of Psychology and
Theology, 31, 83-96.
McMinn, M. R., Hill, P. C., & Griffin, J. W. (2004).
Satisfaction with research training in Christian psychology doctoral
programs: Survey findings and implications. Journal of Psychology and
Christianity, 23, 305-312.
Mark R. McMinn
Jennifer L. Bearse
Laura K. Heyne
Ryan C. Staley
George Fox University
Authors
Mark R. McMinn (Ph.D. in Clinical Psychology, Vanderbilt
University) is Professor of Psychology and Director of Faith Integration
in the Graduate Department of Clinical Psychology at George Fox
University. His research interests include the integration of psychology
and Christianity, clergy health, and technology in clinical practice.
Jennifer L. Bearse (M.A. in Clinical Psychology, George Fox
University) is currently working on her doctorate in clinical psychology
at George Fox University. Her area of interest is the assessment and
treatment of adolescents.
Laura K. Heyne (M.A. in Clinical Psychology, George Fox
University) is a doctoral student in the Clinical Psychology Department at
George Fox University (OR). Her interests include health psychology,
faith perspectives among doctoral students, and program evaluation.
Ryan C. Staley (M.A. in Clinical Psychology, George Fox University;
M.S. in Counseling Psychology, University of Kansas) is
currently a doctoral student in the Department of Clinical Psychology at
George Fox University. His primary research interests include clergy
mental health, psychologist and clergy collaboration, and treatment
issues in child and adolescent populations.
Correspondence regarding this article should be sent to Mark R.
McMinn, Ph.D., Graduate Department of Clinical Psychology, George Fox
University, 414 N. Meridian St., #V104, Newberg, OR 97132.
Table 1
Satisfaction Regarding Clinical Training

Item                                       Overall Faculty Student Alumni Group Diff
Support provided by doctoral faculty         4.2     4.3     4.1    4.3
  when students have questions about
  clinical training
The clinical training students receive       4.1     4.3     3.9    4.4    F>S
  in their coursework *
The type of practicum sites available        4.1     4.4     3.9    4.2    A,F>S
Learning how to integrate psychology         4.0     4.0     3.8    4.3    A>S
  and Christianity in clinical work
The clinical training students receive       4.0     4.2     3.9    4.1
  at practicum sites
Support provided by site supervisors         4.0     4.0     4.0    4.1
  when students have questions about
  clinical training
The variety of practicum sites               4.0     4.4     3.8    4.3    A,F>S
  available to students
The quantity of supervision students         3.9     3.7     3.8    4.1
  receive at practicum sites
The feedback and evaluation students         3.9     4.2     3.8    4.0    F>S
  receive on clinical work
How students are matched with practicum      3.9     4.2     3.6    4.2    A,F>S
  sites
Preparation for students' internship         3.9     4.6     3.5    4.2    F>A>S
  placement
The quality of supervision students          3.8     3.9     3.8    3.9
  receive at practicum sites
Faculty oversight of practicum               3.7     4.1     3.5    3.9    A,F>S
  training *
Preparation for students' first              3.7     4.3     3.5    3.9    A,F>S
  practicum placement
The connection between research and          3.6     3.6     3.6    3.6
  clinical training in coursework
How doctoral faculty help students           3.6     3.9     3.3    3.9    A,F>S
  develop a theoretical orientation
Communication between the practicum          3.5     4.0     3.3    3.7    A,F>S
  sites and the doctoral program
The amount of direct observation of          3.4     3.3     3.3    3.7
  students' clinical work
The connection between research and          3.2     3.2     3.2    3.2
  clinical training at practicum sites *
How site supervisors help students           3.2     3.5     3.1    3.3
  develop a theoretical orientation
Average rating across 20 satisfaction        3.8     4.1     3.6    4.0    A,F>S
  items

Notes. All items were rated on a 5-point Likert scale,
ranging from 1 ("Very Dissatisfied") to 5 ("Very Satisfied").
Items are arranged in descending order based on overall
satisfaction ratings. * indicates items rated significantly
lower than the preceding item in the Overall rating (p < .01).
Group Diff refers to group differences that were found for
particular items, where F = faculty, S = students,
and A = alumni.
Table 2
Satisfaction with Clinical and Research Training
at Integrative Programs

Area of Evaluation    Student      Faculty      Alumni
Clinical Training     3.6 (0.6)    4.0 (0.6)    4.0 (0.7)
                      N = 225      N = 34       N = 128
Research Training     3.3 (0.8)    3.5 (0.7)    3.0 (0.7)
                      N = 282      N = 51       N = 98

Note. Research Training results are from McMinn et al. (2004).
Standard deviations appear in parentheses. Ratings are on a
5-point scale, with 5 being the most favorable rating and 1 the
least favorable. In the present study (clinical training), item
ratings ranged from 1 ("Very Dissatisfied") to 5 ("Very
Satisfied"); in the research training study, from 1 ("Very Poor")
to 5 ("Very Strong").
Table 3
Factor Structure of Satisfaction Items
Factor
Factors and Satisfaction Items Loading
Factor 1: Professional Development
The connection between research and clinical training .78
in coursework
How doctoral faculty help students develop a theoretical .69
orientation
The clinical training students receive in their coursework .67
The amount of direct observation of students' clinical work .62
The feedback and evaluation students receive on clinical work .59
How site supervisors help students develop a theoretical .58
orientation
Support provided by doctoral faculty when students have .55
questions about clinical training
Learning how to integrate psychology and Christianity .51
in clinical work
Factor 2: Clinical Placements
The variety of practicum sites available to students .79
The type of practicum sites available .75
How students are matched with practicum sites .67
Preparation for students' internship placement .63
Preparation for students' first practicum placement .57
Faculty oversight of practicum training .51
Communication between the practicum sites and the doctoral .50
program
Factor 3: Support and Supervision
The quantity of supervision students receive at .81
practicum sites
The quality of supervision students receive .81
at practicum sites
Support provided by site supervisors when students have .80
questions about clinical training
The clinical training students receive at practicum sites .75
Notes. The factor analysis was conducted with principal
components analysis, using varimax rotation and an eigenvalue
criterion of 1.0. Only items with a factor loading of 0.5 or
higher on one and only one factor are listed here.