Article Information

  • Title: Malware Detection for Forensic Memory Using Deep Recurrent Neural Networks
  • Authors: Ioannis Karamitsos; Aishwarya Afzulpurkar; Theodore B. Trafalis
  • Journal: Journal of Information Security
  • Print ISSN: 2153-1234
  • Electronic ISSN: 2153-1242
  • Publication year: 2020
  • Volume: 11
  • Issue: 2
  • Pages: 103-120
  • DOI: 10.4236/jis.2020.112007
  • Publisher: Scientific Research Publishing
  • Abstract: Memory forensics is a young but fast-growing area of research and a promising one for the field of computer forensics. The learned model is proposed to reside in an isolated core with strict communication restrictions to achieve incorruptibility as well as efficiency, therefore providing a probabilistic memory-level view of the system that is consistent with the user-level view. The lower-level memory blocks are constructed using primary block sequences of varying sizes that are fed as input into Long Short-Term Memory (LSTM) models. Four configurations of the LSTM model are explored by adding bidirectionality as well as attention. Assembly-level data from 50 Windows portable executable (PE) files are extracted, and basic blocks are constructed using the IDA Disassembler toolkit. The results show that longer primary block sequences result in richer LSTM hidden-layer representations. The hidden states are fed as features into Max pooling layers or Attention layers, depending on the configuration being tested, and the final classification is performed using Logistic Regression with a single hidden layer. The bidirectional LSTM with Attention proved to be the best model, used on basic block sequences of size 29. The differences between the model's ROC curves indicate a strong reliance on the lower-level, instructional features, as opposed to metadata or string features.
  • Keywords: BiLSTM; Deep Learning; Forensic Memory; LSTM; RNN
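
The abstract above outlines a pipeline in which embedded basic-block sequences are fed to a bidirectional LSTM, an attention layer condenses the hidden states into a single vector, and a small classifier with one hidden layer produces the malware probability. The sketch below is a rough, illustrative PyTorch rendering of that kind of architecture; it is not the authors' implementation, and the vocabulary size, embedding and hidden dimensions, classifier width, and the use of additive attention are assumptions made purely for demonstration.

    # Illustrative sketch only: a bidirectional LSTM with additive attention over
    # basic-block token embeddings, followed by a single-hidden-layer classifier,
    # loosely mirroring the architecture described in the abstract. All sizes
    # below are assumed values, not taken from the paper.
    import torch
    import torch.nn as nn

    class BiLSTMAttentionClassifier(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                                bidirectional=True)
            # Additive attention: score each time step, then take a weighted sum.
            self.attn_score = nn.Linear(2 * hidden_dim, 1)
            # Classifier with a single hidden layer and a sigmoid output.
            self.classifier = nn.Sequential(
                nn.Linear(2 * hidden_dim, 32),
                nn.ReLU(),
                nn.Linear(32, 1),
            )

        def forward(self, token_ids):
            # token_ids: (batch, seq_len) integer-encoded basic-block tokens
            x = self.embed(token_ids)                # (batch, seq_len, embed_dim)
            hidden, _ = self.lstm(x)                 # (batch, seq_len, 2*hidden_dim)
            weights = torch.softmax(self.attn_score(hidden), dim=1)  # attention weights
            context = (weights * hidden).sum(dim=1)  # (batch, 2*hidden_dim)
            return torch.sigmoid(self.classifier(context)).squeeze(-1)

    # Example usage: a batch of 4 sequences of 29 basic-block tokens,
    # matching the sequence length of 29 mentioned in the abstract.
    model = BiLSTMAttentionClassifier()
    tokens = torch.randint(1, 5000, (4, 29))
    malware_prob = model(tokens)   # one probability in [0, 1] per sample

For the max-pooling configurations mentioned in the abstract, the attention step would be replaced by a max over the time dimension (e.g. hidden.max(dim=1).values), with the rest of the pipeline unchanged.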