This 2013 book aims to inform government decision-makers, security analysts, and activists on how to use the social world to improve security locally, nationally, and globally in a cost-effective way.
This 2013 article analyzes content generated on Twitter during the April 2013 Boston Marathon bombings. The authors perform an in-depth characterization of the factors that influenced malicious content and profiles going viral. They also use a regression prediction model to verify that the overall impact of all users propagating fake content at a given time can be used to estimate the future growth of that fake content. In examining fake content around the Boston bombings, the authors identified over six thousand malicious user accounts, many of which were later suspended by Twitter, and observed a surge in the creation of such profiles immediately after the blasts.
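The regression idea described above, using the current aggregate impact of fake-content propagators to predict the content's future growth, can be sketched with a minimal example. The feature (summed follower counts as "impact"), the data, and the helper names below are illustrative assumptions, not the paper's actual model or dataset:

```python
# Illustrative sketch (hypothetical data, not the paper's model): fit an
# ordinary least-squares line that predicts fake-content volume in the next
# time window from the aggregate "impact" (e.g., summed follower counts)
# of users currently propagating the fake content.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

# Hypothetical training pairs: (current propagator impact,
#                               fake tweets observed in the next window)
impact = [1000, 5000, 12000, 20000]
future_fake = [50, 240, 610, 1010]

a, b = fit_linear(impact, future_fake)

def predict_growth(current_impact):
    """Estimated fake-content volume in the next window."""
    return a + b * current_impact
```

In this toy fit, a positive slope `b` reflects the article's finding that higher propagator impact now corresponds to more fake content later; the real study would use richer features and validation.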
Social media is now one of the prominent channels for disseminating information during real-world events, and malicious content posted online during such events can cause damage, chaos, and monetary losses in the real world. The study demonstrated that 29% of the most viral content on Twitter during the Boston crisis consisted of rumours and fake content, 51% was generic opinions and comments, and only the remaining 20% was true information. The authors also found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content, and they identified a closed community structure and star formation in the interaction network of suspended profiles. This article will be useful to practitioners and researchers interested in characterizing and analyzing content during real-time events, as well as in content detection and community identification methods for Twitter and other social media platforms.
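The "star formation" the authors observed in the interaction network of suspended profiles, a hub account linked to many spoke accounts that interact with no one else, can be illustrated with a small stdlib-only sketch. The graph data and function names are hypothetical, not taken from the paper:

```python
# Illustrative sketch (hypothetical data, not the paper's method): detect
# star-like structures in an undirected interaction graph, i.e. nodes whose
# neighbors are pure "spokes" that interact with nobody else.
from collections import defaultdict

# Hypothetical interaction edges between suspended profiles.
edges = [
    ("hub1", "s1"), ("hub1", "s2"), ("hub1", "s3"), ("hub1", "s4"),
    ("a", "b"), ("b", "c"),
]

neighbors = defaultdict(set)
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def star_centers(graph, min_spokes=3):
    """Return nodes with at least min_spokes neighbors, all of degree 1."""
    centers = []
    for node, nbrs in graph.items():
        if len(nbrs) >= min_spokes and all(len(graph[n]) == 1 for n in nbrs):
            centers.append(node)
    return centers
```

Here `star_centers(neighbors)` flags only `hub1`, since its four neighbors interact with no other account, while the `a`-`b`-`c` chain does not qualify; a real analysis would run comparable structural checks over the full retweet/mention network.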