Journal: International Journal of Advanced Computer Science and Applications (IJACSA)
Print ISSN: 2158-107X
Online ISSN: 2156-5570
Year: 2021
Volume: 12
Issue: 5
Page: 152
DOI: 10.14569/IJACSA.2021.0120519
Publisher: Science and Information Society (SAI)
Abstract: Deep learning is one of the most remarkable trends in artificial intelligence. It underlies numerous recent achievements in various domains, such as speech processing and computer vision, to mention a few. Likewise, these achievements have sparked great interest in utilizing deep learning for dimension reduction. Deep learning algorithms built on neural networks involve a number of hidden layers, activation functions, and optimizers, which can make the computation of a deep neural network challenging and, at times, complex. The source of this complexity is that obtaining an outstanding and consistent result from such a deep architecture requires identifying the number of hidden layers and a suitable activation function for dimension reduction. To investigate these issues, linear and non-linear activation functions are chosen for dimension reduction using a Stacked Autoencoder (SAE) applied to Network Intrusion Detection Systems (NIDS). For the experiments in this study, various activation functions, namely linear, Leaky ReLU, ELU, Tanh, sigmoid, and softplus, are used in the hidden and output layers. The Adam optimizer and the Mean Square Error (MSE) loss function are adopted to optimize the learning process. An SVM-RBF classifier is applied to assess the classification accuracy of each activation function on the CICIDS2017 dataset, which contains contemporary attacks on cloud environments. Performance metrics such as accuracy, precision, recall, and F-measure are evaluated, and classification time is also considered as an important metric. Finally, it is concluded that ELU offers the lowest computational overhead with a negligible difference in accuracy (97.33%) compared to the other activation functions.
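The training setup the abstract describes (an ELU-activated autoencoder bottleneck for dimension reduction, optimized with Adam under an MSE reconstruction loss) can be sketched in pure NumPy. This is a minimal illustrative sketch, not the paper's architecture: the layer sizes (20 inputs reduced to 8), learning rate, step count, and the synthetic Gaussian data standing in for CICIDS2017 flow features are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs, saturating exponential for negative
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def elu_grad(x, alpha=1.0):
    return np.where(x > 0, 1.0, alpha * np.exp(x))

# Synthetic stand-in for normalized network-flow features (assumption)
n, d_in, d_hid = 256, 20, 8
X = rng.normal(size=(n, d_in))

# Encoder (W1, b1) and decoder (W2, b2) parameters
W1 = rng.normal(scale=0.1, size=(d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.1, size=(d_hid, d_in)); b2 = np.zeros(d_in)
params = [W1, b1, W2, b2]

# Adam optimizer state and hyperparameters
m = [np.zeros_like(p) for p in params]
v = [np.zeros_like(p) for p in params]
lr, beta1, beta2, eps = 1e-2, 0.9, 0.999, 1e-8

losses = []
for t in range(1, 301):
    # Forward pass: ELU bottleneck gives the reduced representation
    z1 = X @ W1 + b1
    h = elu(z1)               # encoded (dimension-reduced) features
    out = h @ W2 + b2         # linear reconstruction
    err = out - X
    losses.append(np.mean(err ** 2))  # MSE loss

    # Backpropagation of the MSE loss
    g_out = 2 * err / err.size
    gW2 = h.T @ g_out; gb2 = g_out.sum(0)
    g_z1 = (g_out @ W2.T) * elu_grad(z1)
    gW1 = X.T @ g_z1; gb1 = g_z1.sum(0)

    # Adam update (bias-corrected first and second moments)
    for i, (p, g) in enumerate(zip(params, [gW1, gb1, gW2, gb2])):
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        mhat = m[i] / (1 - beta1 ** t)
        vhat = v[i] / (1 - beta2 ** t)
        p -= lr * mhat / (np.sqrt(vhat) + eps)
```

After training, the 8-dimensional codes `elu(X @ W1 + b1)` would be passed to a downstream classifier (an SVM with an RBF kernel in the paper's evaluation); the reconstruction loss should fall steadily as Adam converges.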