Published: 20 September 2023
Harmful actors use an ever-expanding range of digital spaces to spread harmful ideologies and undermine human rights and democracy online. Understanding their evolving ideas, online networks and activities is critical to building a more comprehensive evidence base that can inform effective and proportionate efforts to counter them. But creating that evidence base can stretch the technical capabilities, resources, and even the ethical and legal boundaries of research. We are concerned that all of these constraints may be tightening just as the options for spreading harm online multiply.

It should therefore be of concern that, in many instances, it is increasingly hard to conduct digital research in a systematic, ethical and legal manner. Researchers are left facing difficult trade-offs between competing goods: the desire to understand and mitigate harmful content and behaviour online, the preservation of privacy, and adherence to legal agreements. We argue in this report that this need not be the case. Solutions are available, and action should be taken as soon as possible to ensure that future researchers have the tools to monitor, track and analyse harmful content and behaviour in the manner outlined above: systematically, legally and ethically.

This report presents the findings from the research phase of a project by ISD and CASM Technology, funded by Omidyar Network. The aim of the project is to identify and test research methodologies for monitoring and analysing small, closed or minimally moderated platforms, and to provide applied examples and evidence of the limitations and dilemmas researchers encounter. In three small research case studies, focusing on Telegram, Discord and Odysee in German, English and French respectively, we apply different methodological approaches to platforms that primarily present technological, ethical and legal, or fragmentation barriers.