ISD Written Evidence to the Science, Innovation and Technology Committee Inquiry, on Social Media, Misinformation and Harmful Algorithms

Published: 20 January 2025

This document contains written evidence compiled by the Institute for Strategic Dialogue (ISD), providing insight into the harms associated with social media algorithms, the role they played in the 2024 UK riots, and regulatory approaches for effectively countering these harms.

This evidence examines the extent to which the business models of social media companies, search engines, and similar entities encourage the spread of harmful content and contribute to broader social harms. It explores how these companies use algorithms to rank content, the relationship between their business models and the dissemination of misinformation, disinformation and harmful content, and the role of generative artificial intelligence (AI) and large language models (LLMs) in creating and amplifying such content.

The evidence also considers the influence of social media algorithms on the riots that occurred in the UK during the summer of 2024, alongside an evaluation of the effectiveness of the UK’s regulatory and legislative framework in addressing these challenges. This includes an assessment of the potential impact of the UK’s Online Safety Act, the need for further measures to tackle harmful content, and the role of regulatory bodies such as Ofcom in preventing the spread of false and harmful content online. Additionally, the evidence addresses accountability for the spread of misinformation, disinformation, and harmful content arising from the use of algorithms and AI by social media and search engine companies.