The First 100 Days: Coronavirus and Crisis Management on Social Media Platforms

Published: 15th May 2020

Since January 2020, COVID-19 has become the perfect crucible for online harms. Pandemics are by their nature fast-moving, with constantly evolving information even from credible and expert sources. This is set against a backdrop of heightened fear and anxiety, where valid concerns over resource scarcity, economic fallout and personal safety merge with extremist views on race and social order. New conspiracies and coordinated disinformation efforts have exploded online, preying on the uncertainty of this moment and the ambiguity regarding the source and spread of the disease worldwide.

The disinformation crisis surrounding COVID-19 is not an abstract problem. Online content can catalyse real-world harm, and research is already documenting the risks of COVID-19 disinformation to public health and safety. Countries across the globe have seen a spike in anti-Asian, anti-Semitic and other targeted hate, often directly citing or fuelled by conspiracies surrounding the virus’ origin and transfer. At the same time, debunked theories related to 5G have spurred violent attacks against telecoms infrastructure and related personnel in the UK, Ireland, Belgium and the Netherlands. Conspiracy theories have not only sparked protests in the US, Australia, Germany and the UK (to cite just a few), but are helping promote scepticism and distrust in any future vaccine that might curb the virus’ spread. If such trends continue, they will hinder any efforts to keep the public safe and well-informed.

This report offers an interim review of responses to the COVID-19 ‘infodemic’ from three major technology companies – Facebook, Google and Twitter – from March to May 2020. These platforms have been forced to mobilise at speed, trialling policies and enforcement approaches that can meet such a challenge. The briefing summarises the approaches taken by respective teams at Twitter, Facebook, WhatsApp, Instagram, Google and YouTube, including specific services and policies introduced in recent months and, where possible, the accompanying rationale from companies themselves.

Hosting the ‘Holohoax’: A Snapshot of Holocaust Denial Across Social Media

This briefing paper examines the extent to which Holocaust denial content is readily accessible across Facebook, Twitter, Reddit and YouTube. Drawing on the significant decrease in Holocaust denial content on YouTube over the past year, it also demonstrates that appropriately applied content moderation policies can be effective in denying dangerous conspiracy theorists a public platform.

Developing a Civil Society Response to Online Manipulation

This document presents a vision for a pan-civil societal response to online manipulation. In part, it argues, this will come down to capability: building a pooled detection capacity to function as a transparent, public interest alternative to those built by the tech giants. In part, it will require new organisational philosophies and forms of co-operation, and in part new approaches to funding and support.

The 101 of Disinformation Detection

Disinformation can threaten the activities, objectives and individuals associated with civil society groups and their work. This toolkit lays out an approach that organisations can undertake to begin to track online disinformation on subjects that they care about. The process is intended to have a very low barrier to entry, with each stage achievable using either off-the-shelf or free-to-use social media analysis tools.