
Basic Article Information

  • Title: Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack
  • Authors: Yakov Usoltsev; Balzhit Lodonova; Alexander Shelupanov
  • Journal: Information
  • Electronic ISSN: 2078-2489
  • Year: 2022
  • Volume: 13
  • Issue: 2
  • Pages: 77
  • DOI: 10.3390/info13020077
  • Language: English
  • Publisher: MDPI Publishing
  • Abstract: Machine learning algorithms based on neural networks are vulnerable to adversarial attacks. Attacks against authentication systems greatly reduce the accuracy of such systems, despite the complexity of generating an adversarial example. As part of this study, a white-box adversarial attack on an authentication system was carried out. The basis of the authentication system is a neural network perceptron, trained on a dataset of frequency signatures. For an attack on an atypical dataset, the following results were obtained: at an attack intensity of 25%, the availability of the authentication system drops to 50% for a particular user, and as the attack intensity increases further, the accuracy falls to 5%.
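The abstract does not specify which white-box method was used, but the reported "attack intensity" parameter matches the epsilon of the standard fast gradient sign method (FGSM). A minimal sketch of such an attack, assuming a hypothetical single-layer sigmoid perceptron with binary cross-entropy loss (a simplification, not the paper's exact model), might look like:

```python
import numpy as np

def fgsm_attack(x, w, b, y, epsilon):
    """White-box FGSM: perturb x along the sign of the loss gradient.

    Assumes a single-layer perceptron with sigmoid output and binary
    cross-entropy loss; epsilon plays the role of "attack intensity".
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid prediction
    grad_x = (p - y) * w                   # d(BCE)/dx for this model
    return x + epsilon * np.sign(grad_x)   # adversarial example

# Toy weights and a genuine sample accepted as the legitimate user
# (all values below are illustrative, not from the paper).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.8, -0.6, 0.3])
y = 1.0  # true label: legitimate user

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x_adv = fgsm_attack(x, w, b, y, epsilon=0.25)  # 25% "attack intensity"
print(predict(x), predict(x_adv))  # adversarial score is lower
```

Because the attacker knows the weights (the white-box setting), the gradient of the loss with respect to the input is computed exactly, and each feature is pushed one epsilon step in the direction that increases the loss, lowering the authentication score.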