
Article Information

  • Title: Analysis of Security of Machine Learning and a proposition of assessment pattern to deal with adversarial attacks
  • Authors: Asmaa Ftaimi; Tomader Mazri
  • Journal: E3S Web of Conferences
  • Print ISSN: 2267-1242
  • Electronic ISSN: 2267-1242
  • Year: 2021
  • Volume: 229
  • Pages: 1004
  • DOI: 10.1051/e3sconf/202122901004
  • Publisher: EDP Sciences
  • Abstract: Today, machine learning is being rolled out in a wide variety of areas. It is a promising field that offers many benefits and can revolutionize several aspects of technology. Nevertheless, despite these advantages, learning algorithms can be exploited by attackers to carry out illicit activities. The security of machine learning is therefore attracting growing attention as researchers work to meet this challenge and develop secure learning models. In this paper, we present a taxonomy that helps in understanding and analyzing the security of machine learning models. We then conduct a comparative study of the most widespread adversarial attacks and analyze common methods that have been proposed to protect systems built on machine learning models from adversaries. Finally, we propose a pattern designed to support the security assessment of machine learning models.
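To make the notion of an adversarial attack concrete, the sketch below applies the classic Fast Gradient Sign Method (FGSM) idea to a tiny hand-written logistic-regression classifier. This is an illustrative assumption, not the paper's own experiment: the weights `w`, bias `b`, input `x`, and perturbation budget `epsilon` are all made up for the demo. The attack nudges the input in the direction of the sign of the loss gradient, which is enough to flip the classifier's decision.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, epsilon):
    """FGSM-style perturbation: move x a small step epsilon in the
    direction that increases the logistic (cross-entropy) loss for
    the true label y, i.e. x + epsilon * sign(d loss / d x)."""
    p = sigmoid(np.dot(w, x) + b)   # model's predicted P(class = 1)
    grad_x = (p - y) * w            # gradient of the logistic loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Toy classifier and a clean input with true label 1 (all values assumed).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.9)
p_clean = sigmoid(np.dot(w, x) + b)       # correctly above 0.5
p_adv = sigmoid(np.dot(w, x_adv) + b)     # pushed below 0.5: misclassified
print(p_clean, p_adv)
```

Even though each input coordinate changes by at most `epsilon`, the prediction crosses the decision boundary, which is the core vulnerability the surveyed attacks exploit.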