New Platforms and Technologies: An Overview of the Current Threat Landscape and its Policy Implications

Authors: Mauritius Dorn, Sara Bundtzen, Christian Schwieter, Milan Gandhi

Published: 17 October 2023

This report is in German but is also available in English.

This policy paper was produced as part of the project Digital Policy Lab (DPL), funded by the German Federal Foreign Office. Responsibility for the content lies exclusively with ISD Germany. The DPL is an inter-governmental working group focused on charting the policy path forward to prevent and counter disinformation, hate and extremism. It comprises representatives of relevant ministries and regulatory bodies from liberal democracies. As part of the DPL, ISD organised several working group meetings on the topic of emerging platforms and technologies between May and June 2023. While participants contributed to this publication, the views expressed in this report do not necessarily reflect the views of all participants or of any governments involved in this project.

As the digital information space becomes more decentralised, generative and immersive, the severity and likelihood of risks of harm will also evolve. It is still too early to predict the exact changes, but some relevant trends can already be observed. This policy paper provides an overview of relevant findings on the risks of harm of emerging platforms and technologies, and identifies a series of policy implications, including the following recommendations:

  • Policymakers and regulators must clarify which existing regulatory regimes apply to decentralised social web services, and which approaches to enforcement are applicable. Requirements for service providers to appoint in-country representatives can be considered an important policy element to achieve initial accountability. Policy enforcement must rely on improved international coordination and public pressure, as well as regulators proactively supporting provider compliance (e.g., through the development of compliance plugins).
  • Risks of harm from large language models (LLMs) can be experienced in a variety of consumer-facing applications. To address this, policymakers and regulators must define new rules for access, accountability, liability, safety and detection of LLMs. Self-regulation can provide an interim approach until new rules come into force. At the same time, regulators must be aware of the tactics and techniques used by malign actors to exploit LLMs for their strategies (e.g., for information manipulation).
  • Policymakers and regulators must define what constitutes risks of harm in extended reality (XR) environments and ensure there is an applicable regulatory or co-regulatory framework in place. To this end, they must review existing platform and technology regulations (e.g., the EU’s Digital Services Act and AI Act, Australia’s Online Safety Act 2021) and national criminal codes, and potentially develop XR-specific harm and crime taxonomies. In addition, standards for evidence gathering, reporting and moderation must be developed in a multistakeholder dialogue.
  • In the context of the convergence of new platforms and technologies, policymakers and regulators must work closely together with academia, civil society and industry from different sectors to gain a holistic and deep understanding of the evolving threat landscape. Moreover, they will need to continuously adapt their initiatives to mitigate the respective risks of harm and seek ways to enforce already existing policies.

Mauritius Dorn is a Senior Digital Policy and Education Manager at ISD Germany. He leads the Project AHEAD – a dialogue series to provide an integrated understanding of hybrid threats with a focus on disinformation. He also supports the DPL.

Sara Bundtzen is an Analyst at ISD Germany, where she studies the spread of information manipulation in multilingual online environments. As part of the DPL, Sara analyses policy pathways toward countering disinformation, hate and extremism.

Christian Schwieter is an ISD Fellow and PhD candidate at the Department of Media Studies at Stockholm University. Until 2023, he was a Project Manager at ISD Germany, leading the research project ‘Countering Radicalisation in Right-Wing Extremist Online Subcultures’.

Milan Gandhi is a Research Fellow (AI and Public Policy) at ISD and a Master’s candidate at the University of Oxford. He is also the founder of Legal Forecast, a not-for-profit organisation exploring the intersection of law and new technologies.
