Article Information

  • Title: An overview of three approaches to scoring written essays by computer
  • Authors: Rudner, Lawrence; Gagne, Phill
  • Journal: Practical Assessment, Research and Evaluation
  • Print ISSN: 1531-7714
  • Electronic ISSN: 1531-7714
  • Year: 2001
  • Volume: 7
  • Publisher: ERIC Clearinghouse on Assessment and Evaluation
  • Abstract: It is not surprising that extended-response items, typically short essays, are now an integral part of most large-scale assessments. Extended-response items provide an opportunity for students to demonstrate a wide range of skills and knowledge, including higher-order thinking skills such as synthesis and analysis. Yet assessing students' writing is one of the most expensive and time-consuming activities for assessment programs. Prompts must be designed, rubrics created, raters trained, and the extended responses scored, typically by multiple raters. With different people evaluating different essays, interrater reliability becomes an additional concern in the writing assessment process. Even with rigorous training, differences in the background, training, and experience of the raters can lead to subtle but important differences in grading (Blok & de Glopper, 1992; Rudner, 1992). Computers and artificial intelligence have been proposed as tools to facilitate the evaluation of student essays. In theory, computer scoring can be faster, reduce costs, increase accuracy, and eliminate concerns about rater consistency and fatigue. Further, the computer can quickly rescore materials should the scoring rubric be redefined. This article describes the three most prominent approaches to essay scoring.
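The interrater-reliability concern raised in the abstract is usually quantified with an agreement statistic. Below is a minimal Python sketch, assuming hypothetical scores from two raters on a 1-4 rubric, that computes unweighted Cohen's kappa; the function, rater names, and score data are illustrative and not taken from the article.

```python
# A minimal sketch of an interrater-agreement check, assuming two
# hypothetical raters have scored the same ten essays on a 1-4 rubric.
# Cohen's kappa is one standard agreement statistic; nothing here comes
# from the article itself.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of scores."""
    n = len(rater_a)
    # Observed proportion of essays on which the raters agree exactly.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[s] * freq_b[s] for s in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical scores for ten essays from two trained raters.
rater_1 = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_2 = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # prints kappa = 0.71
```

Values near 1 indicate strong agreement beyond chance; values near 0 suggest the raters' scores agree no more often than their marginal score distributions would predict.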