
Article Information

  • Title: Can Automated Scoring Surpass Hand Grading of Students' Constructed Responses and Error Patterns in Mathematics?
  • Authors: Nava L. Livne ; Oren E. Livne ; Charles A. Wight
  • Journal: Journal of Online Learning and Teaching
  • Print ISSN: 1558-9528
  • Electronic ISSN: 1558-9528
  • Year: 2007
  • Volume: 3
  • Issue: 3
  • Publisher: Multimedia Educational Resource for Learning and Online Teaching
  • Abstract: A unique online parsing system that produces partial-credit scoring of students’ constructed responses to mathematical questions is presented. The parser is the core of a free college-readiness website in mathematics. The software generates immediate error analysis for each student response. The response is scored on a continuous scale, based on its overall correctness and the fraction of correct elements. The parser scoring was validated against human scoring of 207 real-world student responses (r = 0.91). Moreover, the software generates more consistent scores than teachers in some cases. The parser analysis of students’ errors on 124 additional responses showed that the errors factored into two groups: structural (possibly conceptual) and computational (possibly resulting from typographical errors). The two error groups explained 55% of the variance in students’ scores (structural errors: 36%; computational errors: 19%). In contrast, these groups explained only 33% of the variance in teacher scores (structural: 18%; computational: 15%). Agreement among teachers on error classification was low, and their classifications were only weakly correlated with the parser’s error groups. Overall, the parser’s total scoring closely matched human scoring, but the machine surpassed humans in systematically distinguishing between students’ error patterns.
  • Keywords: parser, assessment, automated partial-credit scoring, computer grading, error analysis, online learning, artificial intelligence, natural languages
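The abstract describes scoring a response on a continuous scale from its overall correctness and the fraction of correct elements. A minimal sketch of such a partial-credit scheme is shown below; the element-by-element comparison, the 50/50 weighting, and the function name are illustrative assumptions, not the actual parser's algorithm.

```python
# Illustrative partial-credit scorer (assumed weighting, not the paper's
# actual method): a response and answer key are each given as a sequence of
# parsed elements; the score blends an exact-match bonus with the fraction
# of elements that match the key.

def partial_credit_score(response_elems, key_elems, w_exact=0.5):
    """Return a score in [0, 1].

    w_exact        weight given to full (exact) correctness
    1 - w_exact    weight given to the fraction of correct elements
    """
    if not key_elems:
        return 0.0
    # Count positionally matching elements against the key.
    correct = sum(1 for r, k in zip(response_elems, key_elems) if r == k)
    fraction = correct / len(key_elems)
    exact = 1.0 if list(response_elems) == list(key_elems) else 0.0
    return w_exact * exact + (1 - w_exact) * fraction

# Example: the key "2*x + 3" parsed into elements ["2", "*", "x", "+", "3"];
# a response with one wrong element still earns partial credit.
```

A fully correct response scores 1.0, while a response that matches four of five key elements scores 0.5 * 0.8 = 0.4 under the assumed equal weighting, giving the continuous scale the abstract refers to.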