
Basic article information

  • Title: How humans impair automated deception detection performance
  • Authors: Bennett Kleinberg; Bruno Verschuere
  • Journal: Acta Psychologica
  • Print ISSN: 0001-6918
  • Electronic ISSN: 1873-6297
  • Publication year: 2021
  • Volume: 213
  • Pages: 103250
  • Language: English
  • Publisher: Elsevier
  • Abstract: Background: Deception detection is a prevalent problem for security practitioners. With a need for more large-scale approaches, automated methods using machine learning have gained traction. However, detection performance still implies considerable error rates. Findings from different domains suggest that hybrid human-machine integrations could offer a viable path in detection tasks. Method: We collected a corpus of truthful and deceptive answers about participants' autobiographical intentions (n = 1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful or deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition). Results: The data suggest that in neither of the hybrid conditions did the human judgment add a meaningful contribution. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back to chance level. The hybrid-adjust condition did not improve deception detection performance. The decision-making strategies of humans suggest that the truth bias - the tendency to assume the other is telling the truth - could explain the detrimental effect. Conclusions: The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system. All data are available at https://osf.io/45z7e/.
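
For readers unfamiliar with the class of method the abstract refers to, the sketch below shows a minimal supervised text-classification baseline for labelling statements as truthful or deceptive. It is an illustrative assumption only: the abstract does not specify the authors' actual features or model, so the TF-IDF plus logistic regression pipeline, the example statements, and their labels are all hypothetical; the real corpus is available at the OSF link above.

```python
# Minimal sketch of a supervised truthful-vs-deceptive text classifier.
# Hypothetical example; not the pipeline used in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for statements about autobiographical intentions.
statements = [
    "I am going to visit my sister in Manchester next weekend.",
    "I plan to attend a conference in Berlin to present my research.",
    "I will spend the holidays hiking in the mountains with friends.",
    "I intend to start a new job at a hospital next month.",
]
labels = [1, 0, 1, 0]  # 1 = truthful, 0 = deceptive (made-up labels)

# Bag-of-words features (unigrams and bigrams) feeding a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

# Cross-validated accuracy; with real data this is the machine-only
# figure that human overrule or adjustment would then be compared against.
scores = cross_val_score(clf, statements, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```

In the study design described above, the output of such a classifier would be shown to human judges, who either overrule it outright or adjust it within a bounded range; the reported result is that neither form of human involvement improved on the machine-only accuracy.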