Cracking the Code: An Evaluation of the EU Code of Practice on Disinformation

Published: 26th June 2020
Written By: Chloe Colliver

In October 2018, Facebook, Google, Twitter, Mozilla and a number of advertising industry signatories signed up to the newly drafted EU Code of Practice on Disinformation (CoPD). This assessment evaluates the enforcement of the CoPD during the 2019 European parliamentary elections. It finds that the CoPD prompted progress from tech companies in specific areas of disinformation risk, most notably transparency for political advertising. However, the CoPD's ability to achieve substantive change was fundamentally constrained by its self-regulatory set-up and the absence of any enforcement mechanism for non-compliance.

This report provides a set of recommendations that seek to inform continuing efforts to counter disinformation and online harms at the EU level through the forthcoming Digital Services Act and European Democracy Action Plan in 2020 and beyond. The Institute for Strategic Dialogue (ISD) calls on EU policymakers to design and enforce systemic transparency for advertising, content moderation, appeals and redress systems, and algorithmic design and output in order to address the risks posed by disinformation in the European context.

This assessment is based on research conducted by ISD and additional insights from partner research organisations that evaluated the presence and scale of disinformation targeting the May 2019 European parliamentary elections on the social media platforms signed up to the CoPD. The full methods and findings of that research are available in the accompanying report, Click Here for Outrage: Disinformation in the European Parliamentary Elections 2019.

Hosting the ‘Holohoax’: A Snapshot of Holocaust Denial Across Social Media

This briefing paper examines the extent to which Holocaust denial content is readily accessible across Facebook, Twitter, Reddit and YouTube. By examining the significant decrease in Holocaust denial content on YouTube over the past year, it also demonstrates that appropriately applied content moderation policies can be effective in denying dangerous conspiracy theorists a public platform.

Developing a Civil Society Response to Online Manipulation

This document presents a vision for a pan-civil society response to online manipulation. In part, it argues, this will come down to capability: building a pooled detection capacity to function as a transparent, public-interest alternative to those built by the tech giants. In part, it will require new organisational philosophies and forms of co-operation, and in part new approaches to funding and support.

The 101 of Disinformation Detection

Disinformation can threaten the activities, objectives and individuals associated with civil society groups and their work. This toolkit lays out a process that organisations can follow to begin tracking online disinformation on the subjects they care about. The process is designed to have a very low barrier to entry, with each stage achievable using either off-the-shelf or free-to-use social media analysis tools.
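As a purely illustrative sketch of what the earliest tracking stage might look like in practice, the short Python script below tallies daily mentions of chosen keywords in a CSV export of posts, the kind of file most free social media analysis tools can produce. The file name, column names and keyword list are all hypothetical assumptions for illustration, not details taken from the toolkit itself.

    # Illustrative sketch only: count daily mentions of tracked keywords in a
    # CSV export of social media posts. The file name, column names and
    # keyword list below are hypothetical, not taken from the toolkit.
    import csv
    from collections import Counter
    from datetime import datetime

    KEYWORDS = ["example narrative", "example hashtag"]  # hypothetical topics an organisation tracks

    def daily_keyword_counts(path):
        """Tally posts per (date, keyword) from a CSV with 'timestamp' and 'text' columns."""
        counts = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                day = datetime.fromisoformat(row["timestamp"]).date()
                text = row["text"].lower()
                for keyword in KEYWORDS:
                    if keyword in text:
                        counts[(day, keyword)] += 1
        return counts

    # Print a simple day-by-day volume series for each tracked keyword.
    for (day, keyword), n in sorted(daily_keyword_counts("posts_export.csv").items()):
        print(day, keyword, n)

A daily count series of this kind is typically enough to spot sudden spikes in a narrative's volume, which is the point at which closer manual investigation becomes worthwhile.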