Abstract: Intrusion detection plays an important role in public security domains. Deep neural network (DNN)-based intrusion detection models have been shown to detect network intrusions effectively and in a timely manner. Despite this strong performance, in this paper we verify that such models can be easily attacked by well-designed small adversarial perturbations. We design an effective procedure that employs commonly used adversarial perturbation methods to attack well-trained DNN detection models on the NSL-KDD dataset. We further find that, under attack, the models' accuracy in recognizing the true labels of abnormal samples degrades more than their accuracy on normal samples.
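For illustration, one "commonly used adversarial perturbation" of the kind the abstract refers to is the Fast Gradient Sign Method (FGSM). The sketch below shows FGSM against a toy DNN classifier in PyTorch; the abstract does not name the specific perturbation methods or architecture used, so the model, the 41-feature input (typical of preprocessed NSL-KDD records), and the epsilon value here are all illustrative assumptions, not the paper's actual setup.

```python
# Minimal FGSM sketch (hypothetical model and data; not the paper's
# actual attack procedure or architecture).
import torch
import torch.nn as nn

# Stand-in classifier for NSL-KDD-style records: assume 41 preprocessed
# numeric features and a binary normal/abnormal label.
model = nn.Sequential(nn.Linear(41, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(model, x, y, epsilon=0.05):
    """FGSM: take one small step in the signed-gradient direction
    that increases the classification loss on the true labels."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Add the signed gradient, then detach to get a plain tensor.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage: perturb a batch of placeholder samples and compare predictions
# on clean versus perturbed inputs.
x = torch.rand(8, 41)            # placeholder feature vectors in [0, 1]
y = torch.randint(0, 2, (8,))    # placeholder labels (0=normal, 1=abnormal)
x_adv = fgsm_perturb(model, x, y)
agreement = (model(x).argmax(1) == model(x_adv).argmax(1)).float().mean()
print(f"prediction agreement clean vs. adversarial: {agreement:.2f}")
```

A well-trained model would typically show low agreement between clean and adversarial predictions even at small epsilon, which is the vulnerability the abstract describes.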