
Article Information

  • Title: Attacking DNN-based Intrusion Detection Models
  • Authors: Xingwei Zhang; Xiaolong Zheng; Desheng Dash Wu
  • Journal: IFAC PapersOnLine
  • Print ISSN: 2405-8963
  • Year: 2020
  • Volume: 53
  • Issue: 5
  • Pages: 415-419
  • DOI: 10.1016/j.ifacol.2021.04.118
  • Language: English
  • Publisher: Elsevier
  • Abstract: Intrusion detection plays an important role in public security domains. Dynamic deep neural network (DNN)-based intrusion detection models have been demonstrated to detect network intrusions effectively and in a timely manner. Despite this strong performance, in this paper we verify that such models can be easily attacked by well-designed small adversarial perturbations. We design an effective procedure that employs commonly used adversarial perturbations to attack well-trained DNN detection models on the NSL-KDD dataset. We further find that, under attack, the models' ability to recognize the true labels of abnormal data degrades more than it does on normal samples.
  • Keywords: public security; intrusion detection; deep neural networks; adversarial perturbations
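The "commonly used adversarial perturbations" the abstract refers to are typically gradient-sign methods such as FGSM. The paper publishes no code, so the following is only a minimal sketch of the idea: a logistic-regression "detector" with hypothetical weights stands in for a trained DNN, and one FGSM step perturbs an abnormal sample so the detector's confidence in the true label drops.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a 4-feature flow record (illustrative
# only; not the paper's model, which is a DNN trained on NSL-KDD).
w = np.array([1.5, -2.0, 0.8, 0.5])
b = -0.1

def predict(x):
    """Detector's estimated probability that x is an attack."""
    return sigmoid(x @ w + b)

def fgsm(x, y, eps=0.3):
    """One-step FGSM: x_adv = x + eps * sign(dL/dx), where L is the
    binary cross-entropy loss of the detector on (x, y)."""
    p = predict(x)
    grad_x = (p - y) * w  # dL/dx for BCE composed with sigmoid
    return x + eps * np.sign(grad_x)

x = np.array([0.9, -0.4, 0.7, 0.2])  # an "abnormal" sample, true label 1
y = 1.0
x_adv = fgsm(x, y)

# The perturbation pushes the detector's confidence down, so the
# abnormal sample is more likely to slip past the model.
print(predict(x), predict(x_adv))
```

The same gradient-sign principle applies to a real DNN detector; there the input gradient is obtained by backpropagation rather than the closed-form expression used here.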