
Article Information

  • Title: Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate
  • Authors: David A. Broniatowski; Amelia M. Jamison; SiHua Qi
  • Journal: American Journal of Public Health
  • Print ISSN: 0090-0036
  • Year: 2018
  • Volume: 108
  • Issue: 10
  • Pages: 1378–1384
  • DOI: 10.2105/AJPH.2018.304567
  • Language: English
  • Publisher: American Public Health Association
  • Abstract: Objectives. To understand how Twitter bots and trolls (“bots”) promote online health content. Methods. We compared bots’ and average users’ rates of vaccine-relevant messages, which we collected online from July 2014 through September 2017. We estimated the likelihood that users were bots, comparing proportions of polarized and antivaccine tweets across user types. We conducted a content analysis of a Twitter hashtag associated with Russian troll activity. Results. Compared with average users, Russian trolls (χ²(1) = 102.0; P < .001), sophisticated bots (χ²(1) = 28.6; P < .001), and “content polluters” (χ²(1) = 7.0; P < .001) tweeted about vaccination at higher rates. Whereas content polluters posted more antivaccine content (χ²(1) = 11.18; P < .001), Russian trolls amplified both sides. Unidentifiable accounts were more polarized (χ²(1) = 12.1; P < .001) and antivaccine (χ²(1) = 35.9; P < .001). Analysis of the Russian troll hashtag showed that its messages were more political and divisive. Conclusions. Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination. Public Health Implications. Directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content.

Health-related misconceptions, misinformation, and disinformation spread over social media, posing a threat to public health.1 Despite significant potential to enable dissemination of factual information,2 social media are frequently abused to spread harmful health content,3 including unverified and erroneous information about vaccines.1,4 This potentially reduces vaccine uptake rates and increases the risks of global pandemics, especially among the most vulnerable.5

Some of this misinformation is spread deliberately: vaccine skeptics use online platforms to advocate vaccine refusal.6 Antivaccine advocates have a significant presence on social media,6 with as many as 50% of tweets about vaccination containing antivaccine beliefs.7 Proliferation of this content has consequences: exposure to negative information about vaccines is associated with increased vaccine hesitancy and delay.8–10 Vaccine-hesitant parents are more likely to turn to the Internet for information and less likely to trust health care providers and public health experts on the subject.9,11 Exposure to the vaccine debate may suggest that there is no scientific consensus, shaking confidence in vaccination.12,13 Additionally, recent resurgences of measles, mumps, and pertussis, and increased mortality from vaccine-preventable diseases such as influenza and viral pneumonia,14 underscore the importance of combating online misinformation about vaccines.

Much health misinformation may be promulgated by “bots”15 (accounts that automate content promotion) and “trolls”16 (individuals who misrepresent their identities with the intention of promoting discord). One commonly used online disinformation strategy, amplification,17 seeks to create impressions of false equivalence or consensus through the use of bots and trolls. We seek to understand what role, if any, bots and trolls play in promoting vaccine-related content. Efforts to document how such inauthentic accounts, including bots and trolls, have influenced online discourse about vaccines have been limited.
The US Defense Advanced Research Projects Agency’s (DARPA’s) 2015 Bot Challenge charged researchers with identifying “influence bots” on Twitter in a stream of vaccine-related tweets. The teams effectively identified bot networks designed to spread vaccine misinformation,18 but the public health community largely overlooked the implications of these findings. Instead, public health research has focused on combating online antivaccine content, with less attention to the actors who produce and promote it.1,19 Thus, the role of bots and trolls in online vaccine discourse remains unclear.

We report the results of a retrospective observational study assessing the impact of bots and trolls on online vaccine discourse on Twitter. Using a set of 1 793 690 tweets collected from July 14, 2014, through September 26, 2017, we quantified the extent to which known and suspected Twitter bots and trolls amplify polarizing and antivaccine messages. This analysis is supplemented by a qualitative study of #VaccinateUS, a Twitter hashtag designed to promote discord using vaccination as a political wedge issue. #VaccinateUS tweets were uniquely identified with Russian troll accounts linked to the Internet Research Agency, a company backed by the Russian government that specializes in online influence operations.20 Thus, health communications have become “weaponized”: public health issues, such as vaccination, are included in attempts by foreign powers to spread misinformation and disinformation.

In addition, Twitter bots distributing malware and commercial content (i.e., spam) masquerade as human users to distribute antivaccine messages. A full 93% of tweets about vaccines are generated by accounts whose provenance cannot be verified as either bots or human users, yet which exhibit malicious behaviors. These unidentified accounts preferentially tweet antivaccine misinformation. We discuss implications for online public health communications.
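The group comparisons reported in the abstract are Pearson chi-square tests with 1 degree of freedom on proportions (e.g., the proportion of vaccine-relevant tweets posted by each account type versus average users). As a minimal sketch of how such a test can be run, assuming hypothetical tweet counts (the per-category tallies are not reproduced in this listing), one could build a 2x2 contingency table and pass it to SciPy:

    # Minimal sketch: chi-square test comparing the rate of vaccine-relevant
    # tweets between one account type (e.g., Russian trolls) and average users.
    # All counts below are hypothetical placeholders, not the paper's data.
    from scipy.stats import chi2_contingency

    # Rows: account type; columns: [vaccine-relevant tweets, other tweets]
    table = [
        [250, 9_750],     # hypothetical counts for Russian troll accounts
        [1_200, 98_800],  # hypothetical counts for average users
    ]

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2({dof}) = {chi2:.1f}, P = {p:.3g}")

The paper does not state whether Yates’ continuity correction was applied, so this sketch disables it (correction=False) to match a plain Pearson chi-square with 1 degree of freedom; the same table structure extends to the polarized-versus-antivaccine comparisons.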