
Article Information

  • Title: Evaluation Criteria of Information Retrieval Systems: What We Know and What We Do Not Know
  • Authors: Hariri, Nadjla ; Babalhavaeji, Fahime ; Farzandipour, Mehrdad
  • Journal: Iranian Journal of Information Processing & Management
  • Print ISSN: 2251-8223
  • Electronic ISSN: 2251-8231
  • Year: 2014
  • Volume: 30
  • Issue: 1
  • Pages: 199-221
  • Publisher: Iranian Research Institute for Information and Technology
  • Abstract: Evaluation of information retrieval systems is one of the greatest challenges for information science specialists, because determining the performance of a system depends on judging the relevance of the documents the system provides to the user's information needs, and that judgment has its own complexities. Owing to the dynamic nature of the Web, new retrieval systems differ markedly from traditional retrieval systems. Web information retrieval systems are divided into two groups by how they present results: ranked retrieval results and unranked retrieval sets. Each group has its own evaluation criteria and measures. In this paper, criteria for evaluating information retrieval systems are reviewed: those for ranked retrieval results (including precision and recall curves, interpolated precision, the 11-point interpolated precision average, Mean Average Precision, precision at K, R-precision, ROC curve, cumulative gain, and normalized discounted cumulative gain) and those for unranked retrieval sets (including precision, recall, F-measure, and accuracy) are introduced separately. Finally, building on these evaluation metrics, a four-degree scale comprising best, useful, objective, and differential precision is introduced, which can be used to evaluate the non-binary precision of web information retrieval systems.
  • Keywords: Information Retrieval ; Precision ; Evaluation ; Evaluation Criterion ; Ranked Retrieval Results ; Unranked Retrieval Sets
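As an illustrative aside (not part of the record itself), the metrics the abstract enumerates for the two groups of systems can be sketched in a few lines. The code below computes the unranked-set measures (precision, recall, F-measure) and two of the ranked-list measures (precision at K, and average precision, whose mean over queries is MAP); the document IDs and rankings used are hypothetical examples.

```python
def precision_recall_f1(retrieved, relevant):
    """Unranked-set evaluation: `retrieved` and `relevant` are sets of doc IDs."""
    tp = len(retrieved & relevant)                      # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)               # harmonic mean
    return precision, recall, f1

def precision_at_k(ranking, relevant, k):
    """Ranked evaluation: fraction of the top-k results that are relevant."""
    return sum(1 for doc in ranking[:k] if doc in relevant) / k

def average_precision(ranking, relevant):
    """Mean of precision@k over the ranks k where a relevant document
    appears; averaging this over many queries gives MAP."""
    hits, total = 0, 0.0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

if __name__ == "__main__":
    relevant = {"d1", "d3", "d5"}                       # hypothetical judgments
    ranking = ["d1", "d2", "d3", "d4", "d5"]            # hypothetical ranking
    print(precision_recall_f1(set(ranking[:4]), relevant))
    print(precision_at_k(ranking, relevant, 3))         # 2 of top 3 relevant
    print(average_precision(ranking, relevant))         # (1/1 + 2/3 + 3/5) / 3
```

These binary-relevance measures are exactly what the paper's final four-degree scale is meant to go beyond for non-binary precision.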