
Article Information

  • Title: An Automated Toxicity Classification on Social Media Using LSTM and Word Embedding
  • Authors: Ahmad Alsharef; Karan Aggarwal; Sonia
  • Journal: Computational Intelligence and Neuroscience
  • Print ISSN: 1687-5265
  • Electronic ISSN: 1687-5273
  • Year: 2022
  • Volume: 2022
  • DOI: 10.1155/2022/8467349
  • Language: English
  • Publisher: Hindawi Publishing Corporation
  • Abstract: The automated identification of toxicity in text is a crucial area of text analysis, since social media is replete with unfiltered content ranging from mildly abusive to downright hateful. Researchers have found that training datasets introduce unintended bias and unfairness, leading to inaccurate classification of toxic words in context. This paper presents and assesses several approaches for locating toxicity in text, aiming to enhance the overall quality of text classification. General unsupervised methods built on state-of-the-art models and external embeddings were used to improve accuracy while mitigating bias and raising the F1-score. The suggested approaches combine a long short-term memory (LSTM) deep learning model with GloVe word embeddings, and LSTM with word embeddings generated by Bidirectional Encoder Representations from Transformers (BERT), respectively. These models were trained and tested on a large secondary dataset of comments labeled as toxic or nontoxic. LSTM with BERT word embeddings achieved an accuracy of 94% and an F1-score of 0.89 on the binary classification of comments (toxic versus nontoxic). The combination of LSTM and BERT performed better than both LSTM alone and LSTM with GloVe word embeddings. The paper thus addresses the problem of classifying comments with high accuracy by pretraining models on larger corpora of text (high-quality word embeddings) rather than relying on the training data alone.
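The record carries no source code, so as a rough illustration of the LSTM-with-pretrained-embeddings setup the abstract describes, here is a minimal Keras sketch of the GloVe variant. The vocabulary size, sequence length, layer widths, and GloVe file are assumptions for illustration, not the authors' configuration; replacing the frozen embedding layer with BERT-generated vectors would give the paper's second variant.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# All sizes below are illustrative assumptions, not the paper's settings.
VOCAB_SIZE = 20000  # vocabulary kept after tokenization
EMBED_DIM = 100     # e.g., glove.6B.100d vectors
MAX_LEN = 200       # comments padded/truncated to this many tokens

def load_glove_matrix(glove_path, word_index):
    """Build an embedding matrix from a GloVe text file.
    Words absent from GloVe keep all-zero rows."""
    matrix = np.zeros((VOCAB_SIZE, EMBED_DIM), dtype="float32")
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            idx = word_index.get(parts[0])
            if idx is not None and idx < VOCAB_SIZE:
                matrix[idx] = np.asarray(parts[1:], dtype="float32")
    return matrix

def build_model(embedding_matrix):
    """LSTM binary classifier on top of frozen pretrained embeddings."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(
        VOCAB_SIZE, EMBED_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,  # keep the external word knowledge fixed
    )(inputs)
    x = layers.LSTM(128)(x)   # sequence encoder
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # P(toxic)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```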