
Article Information

  • Title: The Role of Human Knowledge in Explainable AI
  • Authors: Andrea Tocchetti; Marco Brambilla
  • Journal: Data
  • ISSN: 2306-5729
  • Publication Year: 2022
  • Volume: 7
  • Issue: 7
  • Pages: 1-20
  • DOI: 10.3390/data7070093
  • Language: English
  • Publisher: MDPI
  • Abstract: As the performance and complexity of machine learning models have grown significantly over the last years, there has been an increasing need to develop methodologies to describe their behaviour. Such a need has mainly arisen due to the widespread use of black-box models, i.e., high-performing models whose internal logic is challenging to describe and understand. Therefore, the machine learning and AI field is facing a new challenge: making models more explainable through appropriate techniques. The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users, who can thereby gain a better understanding of its logic, thus increasing the trust in and acceptance of the system. Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee the full understandability of explanations from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine learning models. These approaches focus on collecting human knowledge that AI systems can then employ, or on involving humans to achieve their objectives (e.g., evaluating or improving the system). This article presents a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches. Furthermore, a discussion on the challenges, state of the art, and future trends in explainability is also provided.
  • Keywords: explainable AI; human-in-the-loop; human knowledge; explainability; traceability; interpretation; understandability; machine learning; black-box algorithms