
Article Information

  • Title: The effect of teacher communication and course content on student satisfaction and effectiveness.
  • Authors: Parayitam, Satyanarayana; Desai, Kiran; Phelps, Lonnie D.
  • Journal: Academy of Educational Leadership Journal
  • Print ISSN: 1095-6328
  • Year: 2007
  • Issue: September
  • Language: English
  • Publisher: The DreamCatchers Group, LLC
  • Keywords: Student evaluation of teachers; Teacher-student relations; Teacher-student relationships

The effect of teacher communication and course content on student satisfaction and effectiveness.


Parayitam, Satyanarayana; Desai, Kiran; Phelps, Lonnie D.


ABSTRACT

The effectiveness of student evaluation of faculty (SEF) has received increasing attention from academics. In light of this, the present study contributes to the literature in two ways: (a) it provides a conceptualization of the constructs in an SEF instrument through the discriminant and convergent validity and reliability of the measures, and (b) it offers a simultaneous instructor-level and class-level analysis of the correlates of evaluations. This study examined the relative influence of class-level and individual-level perceptions of communication, course content, and fairness in grading exams on perceived satisfaction with and effectiveness of instructors. Based on the SEF instrument used at a university in the South, the present study provides some interesting insights. The results from the analysis of 4186 students and 35 instructors indicated that students' perceptions of the teacher's communication skills in the classroom and of the course content set by the instructor were positively related to both effectiveness and satisfaction. The results also suggest that exams (i.e., the perceived fairness of the instructor's grading procedures) moderate the relationships between communication and course content on the one hand and student satisfaction with the instructor's teaching and students' perception of teacher effectiveness on the other. The results of hierarchical regression analyses support the validity of the instrument students use in evaluating teacher effectiveness. The results support the view that student evaluation instruments need to be taken seriously rather than treated as a mere ritual.

Key Words: Student evaluations, Teacher effectiveness, Perceptions of grading

INTRODUCTION

It is a widely accepted practice for American colleges and universities to use student evaluation of faculty (SEF) to measure the instructional effectiveness of teachers. Reviews of the SEF literature reveal both positive and negative sides of such evaluation. On the positive side, academicians argue that SEF are highly reliable, moderately valid, and subsequently assist teachers in improving their methods of instruction. Available empirical evidence suggests that student ratings can lead to changes in course delivery and thus to more favorable student evaluations (McKeachie, 1996). Meta-analyses and review articles conclude that student ratings are acceptably reliable and valid indicators of teaching effectiveness that can lead to modest improvements in teaching (Braskamp & Ory, 1994). Personality is correlated with an instructor's classroom behavior and educational goals, which in turn are related to teaching effectiveness.

On the other hand, critics argue that (i) SEF are biased in that students tend to give higher ratings when they expect higher grades in the course (called grading leniency bias), (ii) SEF encourage teachers to dumb down courses to keep students happy at all costs, (iii) SEF ratings are often influenced by cosmetic factors that have no effect on student learning, and (iv) SEF are a threat to academic freedom in the sense that teachers may feel inhibited from discussing controversial ideas and posing challenging questions to students for fear that students may express disagreement through the SEF (Braskamp & Ory, 1994).

Critics also ask why teacher effectiveness is defined in terms of 'student satisfaction' and 'why are faculty so willing to trust judgments made by students in areas beyond their competence to judge?' (Gray & Bergmann, 2003). Some scholars suggest that: (a) student ratings should not be used as the only measure of teaching effectiveness, as they do not provide evidence in all areas relevant to teacher effectiveness (e.g., command of subject matter, appropriateness of course content and objectives); other useful sources for assessing teacher effectiveness are the instructor's teaching portfolio and students' actual achievements; (b) the SEF should be made 'achievement' oriented rather than 'satisfaction' oriented, by adding questions such as how much the students learned from the course and by removing questions such as how well the instructors know the subjects they teach, because students may not have adequate knowledge to judge a teacher's knowledge; and (c) when making judgments about an individual instructor, it is better to compare ratings across similar courses (e.g., comparing a business course with a music course amounts to comparing apples with oranges) (Emery, Kramer, & Tian, 2003).

Despite these critical arguments, SEF continue to be an important and frequently contentious research area (Harrison, Douglas, & Burdsal, 2004). The literature so far has focused on (a) the validity of teaching evaluation scales (Greenwald, 1997; McKeachie, 1997), (b) the multidimensionality of teaching (Marsh & Roche, 1997), (c) the structure of student ratings of instructional effectiveness (d'Apollonia & Abrami, 1997), (d) the effect of grading leniency on SEF (Greenwald & Gillmore, 1997), and (e) bias in student ratings (Gillmore & Greenwald, 1999; Marsh & Roche, 1999). Most of this research has used multilevel factor analysis of SEF and examined the factor loadings of items on (a) the instructor's delivery of course information (e.g., enthusiasm, organization, presentation, clarity), (b) the teacher's role in facilitating instructor/student interactions (e.g., group interaction, rapport, understanding learners' backgrounds, ethnicities, and attitudes), and (c) the instructor's role in regulating student learning (e.g., exams, assignments, readings, quizzes). In general, student ratings data are analyzed using summary statistics of central tendency (e.g., the mean) and variability (e.g., the standard deviation) (Jensen & Artz, 2005). Researchers have examined the factorial validity of scores on the Students' Evaluation of Teaching Effectiveness Rating Scale (SETERS) both by conventional confirmatory factor analysis using the total covariance and pooled within-covariance matrices, and by multilevel factor analysis, which allows simultaneous examination of the within- and between-class structures while taking measurement error into account (Toland & De Ayala, 2005). Thus, despite the objections, academic institutions continue to use SEF because they are the only objective measure of a teacher's performance, are easy and inexpensive to administer, and provide a basis for teachers' retention, promotion, and pay raises.

Some anecdotal evidence is available to explain the relationship between teacher communication and perceived teacher effectiveness. In the well-known experiment by Stephen J. Ceci, a professor at Cornell University, an instructor taught a developmental psychology course twice: once using his customary style, and a second time making a deliberate effort to be more exuberant, adding hand gestures and varying the pitch of his voice. Though he used the same textbook and the same content in both cases, his ratings were higher the second time. Surprisingly, students also gave higher ratings to the textbook the second time. The experiment conveys a message: communication style does affect students' evaluations of teachers (Gray & Bergmann, 2003). Some studies have found that lecture content affected student achievement but had only a negligible impact on student ratings. A recent study found that satisfaction with the instructor was significantly related to the perceived fairness of the instructor's grading procedures, the perceived fairness of the expected grades, and the fairness of instructor-student interactions (Wendorf & Alexander, 2005).

Yet there remains a gap in this area, in the sense that no researchers have dealt with how the constructs in the SEF instrument are related to each other. The review of prior research shows the relative importance of various specific instructional characteristics (normally represented by the dimensions of the measurement instrument), but research is not yet clear as to how these are related. SEF instruments generally contain the following components: course organization and design, rapport with students, grading quality, and course value. What is missing from the research is an examination of how these factors are related to each other. The present research is aimed at bridging that gap and extending the research. Not only are the validity and reliability of the measures important; the relationships between the constructs also need statistical examination before the instrument can be fully relied upon. The hypothesized model is presented in Figure 1.

[FIGURE 1 OMITTED]

The terms we used are interpreted as follows:

Exams: Perceived fairness of the instructor's grading procedures

Communication: Students' perception of teacher communication

Course content: Students' perception of the course content as described by the instructor in the course outline

Effectiveness: Perceived effectiveness of the instructor

We therefore propose hypothesized relationships between the constructs and express them through hypotheses H1 through H4. We also propose that students' perception of the teacher's grading and examinations moderates the relationships between the predictor variables (communication and course content) and the outcome variables (satisfaction and effectiveness). The hypotheses are listed below, followed by a sketch of the regression form they imply:

H1: Students' perception of the teacher's communication is positively related to student satisfaction with the teacher.

H2: Students' perception of the teacher's communication is positively related to perceived teacher effectiveness.

H3: Students' perception of course content is positively related to student satisfaction with the teacher.

H4: Students' perception of course content is positively related to perceived teacher effectiveness.

H1a: Students' perception of exams moderates the relationship between communication and satisfaction, such that greater scores on exams are associated with higher satisfaction.

H2a: Students' perception of exams moderates the relationship between communication and effectiveness, such that greater scores on exams are associated with greater effectiveness.

H3a: Students' perception of exams moderates the relationship between course content and satisfaction, such that greater scores on exams are associated with higher satisfaction.

H4a: Students' perception of exams moderates the relationship between course content and effectiveness, such that greater scores on exams are associated with greater effectiveness.
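
In regression terms, each moderation hypothesis corresponds to the significance of a product term. As a sketch (omitting the control variables and the second predictor that appear in the full models reported below), the model implied by H1a is:

\[
\text{Satisfaction} = \beta_0 + \beta_1\,\text{Communication} + \beta_2\,\text{Exams} + \beta_3\,(\text{Communication} \times \text{Exams}) + \varepsilon
\]

H1a is supported when \(\beta_3\) is positive and statistically significant; H2a through H4a take the same form with the corresponding predictor and outcome substituted.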

METHODOLOGY

Data and Sample

A university teaching evaluation scale was used in this study to measure students' perceptions of teaching effectiveness. Data were collected from both undergraduate and graduate students at a southwestern public university. All students participated voluntarily. Data were collected from multiple courses, so it was possible for students to respond more than once; since students were rating different courses, there is no duplication of surveys. Following the normal procedure at the university, data were collected from students enrolled in classes two to three weeks before the end of the semester. Students were given an opportunity to complete the instrument, which consisted of demographics and measures of teaching effectiveness, satisfaction, and exams. Students were instructed that the purpose of the evaluation was to see how satisfied they were with the course content and the instructor's teaching methods. They were also asked to write comments and suggestions, if necessary, to improve the teaching methods.

There were 4186 usable surveys collected from students, after excluding incomplete surveys from the analysis.

Measures

Communication

Communication was measured using three items. The reliability coefficient (Cronbach's alpha) for communication is .87. Questions were asked to determine the overall communication effectiveness of the instructor, such as, "The instructor communicated clearly and effectively". Answers to these questions provide valuable insight into the communication skills exhibited by instructors in the classroom as perceived by students.

Exams

Examinations and testing were measured using four items. The alpha for this measure was acceptable at .86. A sample item read: "The instructor discussed and answered items on returned tests and assignments".

Course Content

Course content was measured using four items. The alpha for this measure was high at .90. A sample item read: "The course covered material consistent with the stated objectives".

Satisfaction and Effectiveness

Satisfaction and effectiveness were each measured using a single item. These measures are expected to tap the extent to which students were satisfied with their instructors and whether instructors were effective in helping students achieve their goals.

Data Analysis

A confirmatory factor analysis was estimated on the 12 items measuring communication, exams, and course content. Using structural equation modeling, each item was constrained to load on the factor for which it was a proposed indicator. The factor loadings exceed .72 for all items, with the exception of one exam item that loaded at .62. The goodness-of-fit measures are as follows: χ² = 4579.36 (df = 70); goodness-of-fit index [GFI] = 0.86; comparative fit index [CFI] = 0.97; root-mean-square error of approximation [RMSEA] = 0.12; root-mean-square residual [RMR] = 0.036. We further tested for discriminant validity by following the procedures outlined by Fornell and Larcker (1981) and Netemeyer, Johnston, and Burton (1990), comparing the variance-extracted estimates of the measures with the squared correlations between constructs. The variance-extracted estimate is calculated by dividing the sum of the squared factor loadings by the sum of the squared factor loadings plus the sum of the variance due to random measurement error in each loading:

\[
\text{Variance extracted} = \frac{\sum \lambda_{y_i}^{2}}{\sum \lambda_{y_i}^{2} + \sum \mathrm{Var}(\varepsilon_i)}
\]

If the variance-extracted estimates of the variables are greater than the squared correlations between the constructs, evidence of discriminant validity is said to exist (Fornell & Larcker, 1981). In this study, the variance-extracted estimate for each variable exceeds the suggested level of .50 (Fornell & Larcker, 1981, p. 46) and also exceeds the squared correlations between the variables. The variance-extracted estimates for communication, course content, and exams were .65, .59, and .51 respectively, all exceeding the accepted cutoff of .50. These statistics, together with the CFA results, offer support for discriminant validity between students' perceptions of communication, exams, and course content. Overall, these results suggest that the factor structure fits the data well and that the measures have discriminant validity. The results of the CFA for all the variables are reported in Table 1.
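
For concreteness, the variance-extracted computation can be reproduced from the standardized loadings reported in Table 1. The following is a minimal Python sketch (not the authors' code); because the published loadings are rounded to two decimals, the recomputed values may differ slightly from the estimates reported above.

```python
import numpy as np

def variance_extracted(loadings):
    """Fornell-Larcker variance-extracted estimate: the sum of squared
    standardized loadings divided by that sum plus the summed error
    variances (1 - lambda^2 for each standardized item)."""
    lam2 = np.square(np.asarray(loadings))
    error_var = 1.0 - lam2
    return lam2.sum() / (lam2.sum() + error_var.sum())

# Standardized loadings from Table 1
communication = [0.92, 0.75, 0.74]
exams = [0.74, 0.62, 0.72, 0.75]
course_content = [0.86, 0.72, 0.74, 0.85, 0.84]

for name, lam in [("Communication", communication),
                  ("Exams", exams),
                  ("Course content", course_content)]:
    # Discriminant validity holds when this estimate exceeds the squared
    # correlation between the construct and every other construct.
    print(f"{name}: variance extracted = {variance_extracted(lam):.2f}")
```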

The hypotheses were tested using hierarchical moderated regression analysis. All models included control variables before the main-effect and interaction variables were introduced. Since the regression analysis involved interactions, the main-effect terms and product terms could be highly correlated, raising the issue of multicollinearity and making the regression coefficients unstable and difficult to interpret (Cohen & Cohen, 1983). As suggested by Aiken and West (1991), we used centered variables in the analysis, because interaction analysis using the centering procedure yields coefficients that are relatively free of multicollinearity. We also plotted the significant interactions to facilitate interpretation of the moderator effects.
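
A minimal sketch of this centering-plus-product-term procedure in Python with statsmodels follows; the data file and column names are hypothetical, the controls are reduced to the ones named in Table 3, and this is an illustration of the procedure, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names, for illustration only
df = pd.read_csv("sef_responses.csv")  # one row per completed survey

# Mean-center the predictors before forming product terms (Aiken & West, 1991)
for col in ["exam", "communication", "course_content"]:
    df[col + "_c"] = df[col] - df[col].mean()
df["comm_x_exam"] = df["communication_c"] * df["exam_c"]

controls = "class_code + term + section + instructor_id"

# Step 1: controls plus main effects
base = smf.ols(f"satisfaction ~ {controls} + exam_c + communication_c"
               " + course_content_c", data=df).fit()

# Step 2: add the interaction term, then examine its weight and the R^2 increment
moderated = smf.ols(f"satisfaction ~ {controls} + exam_c + communication_c"
                    " + course_content_c + comm_x_exam", data=df).fit()

print(f"interaction beta = {moderated.params['comm_x_exam']:.2f}, "
      f"p = {moderated.pvalues['comm_x_exam']:.4f}")
print(f"delta R^2 = {moderated.rsquared - base.rsquared:.3f}")
```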

RESULTS

Means, standard deviations, and zero-order correlations are reported in Table 2. Our initial inspection of the descriptive statistics suggests that communication and course content are highly correlated, at .86. Kennedy (1985) suggests that correlations of .8 or higher may be problematic from the viewpoint of multicollinearity. Tsui, Ashford, Clair and Xin (1995) state that there is no exact level of correlation that constitutes a serious multicollinearity problem, and they suggest .75 as a general rule. Since the correlations between communication and satisfaction (.79) and between course content and satisfaction (.78) are also high, a check for multicollinearity is warranted. We performed a statistical check for multicollinearity by observing the variance inflation factor (VIF) of each independent variable. The largest VIF was less than 2; thus, there is support that multicollinearity is not a problem (Kennedy, 1985).
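
The VIF check can be run as follows; this is a minimal sketch using statsmodels, with the same hypothetical column names as above.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

df = pd.read_csv("sef_responses.csv")  # hypothetical file name

# VIF is computed per predictor from a design matrix that includes a constant
X = add_constant(df[["exam", "communication", "course_content"]])
vifs = pd.Series([variance_inflation_factor(X.values, i)
                  for i in range(X.shape[1])], index=X.columns)

# By Kennedy's (1985) rule of thumb, VIFs near or below 2 give little
# cause for concern about multicollinearity
print(vifs.drop("const"))
```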

Multiple regression analysis was used to test the hypotheses that communication and course content are positively related to satisfaction and effectiveness. In addition, moderated hierarchical regression analysis was used to test the extent to which exams moderate the relationships between communication and satisfaction, communication and effectiveness, course content and satisfaction, and course content and effectiveness. To test the moderator hypotheses, we created linear-by-linear interaction terms by multiplying the proposed moderator (exams) by the communication and course content variables (Aiken & West, 1991). After entering the main effects and control variables into the equation, the multiplicative terms were added, and the regression weights for the multiplicative terms were examined for significance. The results are presented in Table 3. The instructor-wise analysis of the results is presented in Table 4.

As shown in Column 1, communication (β = .49, p < .001) and course content (β = .42, p < .001) were positively related to satisfaction, and the beta coefficients were statistically significant. In addition, exams were negatively related to satisfaction, as hypothesized in the model (β = -.09, p < .001). The main-effects model explained 66.5% of the variance in satisfaction (F = 1185.21, p < .001, df = 1, 4177). These findings suggest that communication and course content are strong predictors of satisfaction, thus supporting H1 and H3.

Column 4 shows the direct effects of communication and course content on effectiveness. Once again, communication and course content are strong predictors of perceived teacher effectiveness, with beta coefficients of β = .42 (p < .001) and β = .46 (p < .001) respectively. Examination is negatively related to effectiveness (β = -.04, p < .05). The direct-effects model explained 66.1% of the variance in effectiveness (F = 1162.53, p < .001, df = 1, 4177). Overall, the results provide support for H2 and H4.

Hypothesis 1a concerns exams as a moderator of the relationship between communication and satisfaction. The results of the moderated regression (Column 2) show a significant interaction between communication and exams in their effect on satisfaction. The moderated regression model yielded beta coefficients for exam (β = -.36, p < .001), course content (β = .43, p < .001), and the interaction term (β = .69, p < .001). The moderated regression model was significant (F = 1103.98, p < .001, df = 1, 4176), explaining 67.9 percent of the variance. The inclusion of the interaction between communication and exams accounted for an additional 1.4 percent of the variance in satisfaction (ΔF = 179.94, p < .001; ΔR² = .014). These results support H1a: exams moderated the relationship between communication and satisfaction.

Figure 2 shows the interaction plot, with regression lines linking communication to satisfaction under conditions of low and high exam scores. By high exam scores we mean that instructors made the pattern of exams and the grading system clear to students, returned tests on time, and so on; low scores mean that instructors earned low ratings on these items. In plotting the interaction, we followed the procedure laid out by Aiken and West (1991), computing the slopes from beta coefficients derived from regression equations that adjust the interaction term to reflect different values of the moderator (low scores were defined as one standard deviation below the mean and high scores as one standard deviation above the mean). As the figure shows, communication associated with high scores on exams yields higher satisfaction than communication associated with low exam scores. These results provide support for H1a. The interaction plots are presented in Figure 2.

[FIGURE 2 OMITTED]
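
The ±1 SD plotting procedure can be sketched in a few lines of Python (matplotlib). This is an illustrative reconstruction using the standardized coefficients from Model 2 of Table 3, omitting the controls and course content for clarity; it is not the authors' code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Standardized coefficients from Table 3, Model 2 (satisfaction)
b_comm, b_exam, b_inter = 0.04, -0.36, 0.69

# Centered communication scores spanning -1 SD to +1 SD
comm = np.linspace(-1, 1, 50)

for exam, label in [(-1.0, "Low exams (-1 SD)"), (1.0, "High exams (+1 SD)")]:
    # Simple slope of communication at a fixed level of the moderator:
    # y-hat = (b_comm + b_inter * exam) * comm + b_exam * exam
    satisfaction = (b_comm + b_inter * exam) * comm + b_exam * exam
    plt.plot(comm, satisfaction, label=label)

plt.xlabel("Communication (centered, SD units)")
plt.ylabel("Predicted satisfaction")
plt.legend()
plt.show()
```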

Column 3 of Table 3 presents the moderating effect of exams on the relationship between course content and satisfaction. The beta coefficients for exam (β = -.35, p < .001), communication (β = .49, p < .001), and the interaction term (β = .64, p < .001) were significant, suggesting that H3a is supported. The overall model explained 67.7% of the variance and is significant (F = 1093.58, p < .001). Compared with the base model, the moderated model explained an additional 1.2% of the variance (ΔF = 152.08, p < .001; ΔR² = .012, df = 1, 4176).

Hypothesis 2a concerns exams as a moderator of the relationship between communication and effectiveness. Column 5 of Table 3 shows that the beta coefficients for exam (β = -.23, p < .001), communication (β = .09, p < .05), and the interaction term between communication and exam (β = .49, p < .001) are significant, with the overall model significant (F = 1049.34, p < .001) and explaining 66.8% of the variance in effectiveness. The moderated model explained an additional .7% of the variance (ΔF = 88.26, p < .001; ΔR² = .007, df = 1, 4176). The regression results for exam as a moderator of the relationship between course content and effectiveness are presented in Column 6. The beta coefficients for exam (β = -.22, p < .001), communication (β = .41, p < .001), course content (β = .17, p < .001), and the interaction term between course content and exam (β = .46, p < .001) are significant, with the overall model significant (F = 1044.91, p < .001) and explaining 66.7% of the variance in effectiveness. The moderated model explained an additional .6% of the variance (ΔF = 76.25, p < .001; ΔR² = .006, df = 1, 4176).

DISCUSSION

While it is a generally accepted practice (and sometimes mandatory) to collect student evaluations of faculty (SEF), most universities tend to treat the exercise as a ritual. Merit decisions about faculty are often based partly on teachers' instructional effectiveness, and one way to secure such a measure is through SEF instruments. The educational psychology literature on SEF is vast but is limited largely to the construct validity and reliability of the instrument. One serious gap in the literature is that underlying relationships between the constructs are assumed, and these relationships are very rarely tested statistically. That is to say, the relationships between components such as course content, course description, and communication and perceived teacher effectiveness, as expressed by students in their evaluation forms, are not examined. Instead, researchers report statistics on these constructs such as reliability coefficients, means, and standard deviations. One reason these relationships are not tested is the inherent assumption that teacher communication as perceived by students, the course description as outlined by instructors, and the grading pattern as perceived by students will have a positive effect on the perceived effectiveness of teachers.

The major objective of this article was to study the relationships among the constructs in an SEF instrument used to assess teaching effectiveness. One interesting finding (as evidenced in Table 2) is the extremely high correlations between the constructs (perceived course content, communication, satisfaction, and effectiveness). This study not only tested the validity and reliability of the SEF instrument but also tested the relationships between the constructs it comprises. The regression results support the conclusion that both communication style as perceived by students and course content are positively related to both student satisfaction with the teacher and perceived teacher effectiveness. The moderated regression results support the conclusion that students' perceptions of grading and exams moderated the relationships between course content and communication on the one hand and teacher effectiveness on the other. Grading likewise moderated the relationships between course content and communication and student satisfaction with teachers.

These results add value to the literature in two ways. In surveying the literature, we noticed that most studies focused on testing the measurement instrument through construct validity and reliability. To our knowledge, these studies did not examine the interrelationships between the variables the instrument measures or purports to measure. What is the use of construct validity if we cannot find meaningful relationships between the study variables? Our study aims to bridge this gap by focusing on the new dimension of studying the relationships between the variables, in addition to establishing the validity and reliability of the measures in the SEF instrument. The results also point to future research examining how perceived course content, workload, and communication are related to perceived student satisfaction and teacher effectiveness at different types of universities. It would be useful to determine whether the results obtained in this study generalize to universities in different categories of the 2000 Carnegie classification.

Future research is also needed to examine, using experimental methods, the relationship between students' preferences, the instructor's teaching methods, and teacher effectiveness, and to examine whether perceptions differ by ethnic background, age, and gender. One recent study shows that reliable differences exist between instructors and that these differences may be strongly tied to disparities in instructor fairness; it suggests that the class is the appropriate unit of analysis (i.e., use class means rather than individual ratings) (Wendorf & Alexander, 2005). This again raises the issue of levels of analysis. Future researchers need to take these suggestions into account when evaluating teaching effectiveness using SEF.

Overall, research on college teaching using SEF offers a clear avenue for future research. Our results, though the first of their kind in analyzing SEF along this dimension, are expected to enrich the understanding and analysis of student evaluations for academicians and administrators.

REFERENCES

Aiken, L., & West, S. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.

Braskamp, L.A., & Ory, J.C. (1994). Assessing faculty work. San Francisco, CA: Jossey-Bass.

Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

d'Apollonia, S., & Abrami, P.C. (1997). Navigating student ratings of instruction. American Psychologist, 52(11), 1198-1208.

Emery, C., Kramer, T., & Tian, R. (2003). Return to academic standards: Challenge the student evaluations of teaching effectiveness. Quality Assurance in Education, 11(1), 37-46.

Fornell, C., & Larcker, D.F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39-50.

Gillmore, G.M., & Greenwald, A.G. (1999). Using statistical adjustment to reduce bias in student ratings. American Psychologist, 54(7), 518-519.

Gray, M., & Bergmann, B.R. (2003). Student teaching evaluations: Inaccurate, demeaning, misused. Academe, 89(5), 44-46. http://www.aaup.org/publications/Academe/2003/03so/03sogray.htm

Greenwald, A.G. (1997). Validity concerns and usefulness of student ratings of instruction. American Psychologist, 52(11), 1182-1186.

Greenwald, A.G., & Gillmore, G.M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52(11), 1209-1217.

Harrison, P.D., Douglas, D.K., & Burdsal, C.A. (2004). The relative merits of different types of overall evaluations of teaching effectiveness. Research in Higher Education, 45(3), 311-323.

Jensen, J.B., & Artz, N. (2005). Using quality management tools to enhance feedback from student evaluations. Decision Sciences Journal of Innovative Education, 3(1), 47-72.

Kennedy, P. (1985). A guide to econometrics (2nd ed.). Cambridge, MA: MIT Press.

Marsh, H.W., & Roche, L.A. (1997). Making students' evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist, 52(11), 1187-1197.

Marsh, H.W., & Roche, L.A. (1999). Rely upon SET research. American Psychologist, 54(7), 517-518.

McKeachie, W.J. (1996). Student ratings of teaching. American Council of Learned Societies Occasional Paper No. 33. Washington, DC. http://www.acls.org/op33.htm#McKeachie

McKeachie, W.J. (1997). Student ratings: The validity of use. American Psychologist, 52(11), 1218-1225.

Netemeyer, R.G., Johnston, M.W., & Burton, S. (1990). Analysis of role conflict and role ambiguity in a structural equation framework. Journal of Applied Psychology, 75, 148-157.

Toland, M.D., & De Ayala, R.J. (2005). A multilevel factor analysis of students' evaluation of teaching. Educational and Psychological Measurement, 65(2), 272-296.

Tsui, A., Ashford, S., Clair, L., & Xin, K. (1995). Dealing with discrepant expectations: Response strategies and managerial effectiveness. Academy of Management Journal, 38, 1515-1543.

Wendorf, C.A., & Alexander, S. (2005). The influence of individual- and class-level fairness-related perceptions on student satisfaction. Contemporary Educational Psychology, 30(2), 190-206.

Satyanarayana Parayitam, McNeese State University

Kiran Desai, McNeese State University

Lonnie D. Phelps, McNeese State University
Table 1: Results of Confirmatory Factor Analysis and Measurement Properties

Variable                                                   Loading   Reliability   Error Variance
                                                           (λ_yi)    (λ²_yi)       (Var(ε_i))

Communication (Factor 1): alpha = .87, variance-extracted estimate = .65
  The instructor communicated clearly and effectively       0.92      0.85          0.15
  The instructor was willing to provide extra help
    as needed                                               0.75      0.56          0.44
  The instructor allowed/encouraged relevant questions
    or comments                                             0.74      0.55          0.45

Examinations/Testing (Factor 2): alpha = .86, variance-extracted estimate = .51
  The instructor discussed and answered items on
    returned tests and assignments                          0.74      0.55          0.45
  The instructor graded and returned tests within
    two weeks                                               0.62      0.40          0.60
  The instructor made it clear how my grade in the
    course would be determined                              0.72      0.52          0.48
  The instructor applied grading standards consistently
    from student to student                                 0.75      0.56          0.44

Course Content (Factor 3): alpha = .90, variance-extracted estimate = .59
  The instructor presented content in an "organized,
    logical fashion"                                        0.86      0.74          0.26
  The course covered material consistent with the
    stated objectives                                       0.72      0.52          0.48
  The instructor provided course materials in a
    timely manner                                           0.74      0.55          0.45
  The instructor was well prepared                          0.85      0.72          0.28
  The instructor stayed on the subject                      0.84      0.70          0.30

Table 2: Means, Standard Deviations, and Correlations (a)

Variable          Mean   SD     1        2        3        4

Exam              4.59   0.76   (.86)
Communication     4.40   0.88   .78***   (.87)
Course Content    4.46   0.84   .83***   .86***   (.90)
Satisfaction      4.21   1.22   .65***   .79***   .78***
Effectiveness     4.28   1.21   .67***   .78***   .78***   .81***

(a) N = 4186. Values in parentheses on the diagonal are reliability coefficients.
*** p < .001

Table 3: Moderated Regression Analysis of Classroom Instruction on
Satisfaction and Effectiveness with Teacher (a)

                            Satisfaction                     Effectiveness
Variables                   Model 1   Model 2    Model 3     Model 1   Model 2    Model 3

Class                        .03***    .03**      .03**       .01       .00        .01
Term                         .04**     .03**      .03**       .01       .00        .01
Section                     -.03*     -.03**     -.03**       .01       .00        .00
Instructor                  -.04***   -.03**     -.03**      -.02*     -.01       -.01
Exam                        -.09***   -.36***    -.35***     -.04**    -.23***    -.22***
Communication                .49***    .04        .49***      .42***    .09**      .41***
Course Content               .42***    .43***     .02*        .46***    .47***     .17***
Communication × Exam                   .69***                           .49***
Course Content × Exam                             .64***                           .46***
R²                           .665      .679       .677        .661      .668       .667
F-Value                     1185.21   1103.98    1093.58     1162.53   1049.34    1044.91
ΔR²                                    .014       .012                  .007       .006
ΔF-Value                              179.94***  152.08***              88.26***   76.25***
df                          1, 4177   1, 4176    1, 4176     1, 4177   1, 4176    1, 4176

*** p < .001, ** p < .05, * p < .10
(a) Standardized regression coefficients are reported.

Table 4: Moderated Regression Analysis of Classroom Instruction on
Satisfaction and Effectiveness with Teacher (a) (Instructor-wise analysis)

                            Satisfaction                     Effectiveness
Variables                   Model 1   Model 2    Model 3     Model 1   Model 2    Model 3

Exam                        -.36**    -.72**     -.69**      -.09      -.35**     -.34*
Communication                .84***    .26        .92***      .74***    .32        .79***
Course Content               .48*      .32       -.26         .34*      .23       -.21
Communication × Exam                  1.08**                            .79**
Course Content × Exam                             .99**                            .74**
R²                           .94       .949       .947        .964      .969       .968
F-Value                     181.3***  157.93***  151.69*     312.46*** 257.58***
ΔR²                                    .009       .007                  .005       .004
ΔF-Value                               6.25**     4.74**                5.38**     4.309**
df                          3, 35     1, 34      1, 34       3, 35     1, 34      1, 34

*** p < .001, ** p < .05, * p < .10
(a) Standardized regression coefficients are reported.