
Article Information

  • Title: COMPARISON OF PERFORMANCE MEASURES OBTAINED FROM FOREIGN LANGUAGE TESTS ACCORDING TO ITEM RESPONSE THEORY VS CLASSICAL TEST THEORY
  • Author: Murat Polat
  • Journal: International Online Journal of Education and Teaching
  • E-ISSN: 2148-225X
  • Year: 2022
  • Volume: 9
  • Issue: 1
  • Pages: 471-485
  • Language: English
  • Publisher: Informascope
  • Abstract: Foreign language testing is a multi-dimensional phenomenon, and obtaining objective, error-free scores of learners' language skills is often problematic. When assessing foreign language performance on high-stakes tests, using different testing approaches, including Classical Test Theory (CTT), Generalizability Theory (GT), and/or Item Response Theory (IRT), may help both to obtain results closer to students' true proficiency scores and to minimize the error in these measurements, depending on the number of items, the test time, and the effort spent on evaluation. In this study, two popular testing theories, CTT and IRT, were compared in language proficiency testing. The multidimensionality of two multiple-choice language tests taken by 2032 low-intermediate and intermediate level language students in the spring term of the 2018-2019 academic year was examined via CTT and IRT. DIMTEST (Dimensionality Test) results revealed that both language tests were two-dimensional. A NOHARM analysis, carried out to determine which item response theory model best fit the data, showed that the language test data fit the 3-parameter logistic model. Ultimately, the correlations between foreign language proficiency estimates based on CTT and IRT ranged between 0.806 and 0.891 across the two test booklets. Thus, it was concluded that although the two theory-based proficiency estimates were similar, they should not be used interchangeably.
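
Note: the 3-parameter logistic (3PL) model named in the abstract is conventionally written as below; the notation (item discrimination a_i, difficulty b_i, pseudo-guessing c_i, examinee ability \theta) is the standard IRT formulation, not values or symbols taken from this article.

  P_i(\theta) = c_i + (1 - c_i) \cdot \frac{1}{1 + e^{-a_i(\theta - b_i)}}

Here P_i(\theta) is the probability that an examinee with ability \theta answers item i correctly; c_i sets the lower asymptote for guessing, b_i shifts the curve along the ability scale, and a_i controls its steepness.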