Banning RT and Sputnik Across Europe: What Does it Hold for the Future of Platform Regulation?

5 April 2022

By Sara Bundtzen & Mauritius Dorn

Since Russia’s invasion of Ukraine, ISD monitoring has shown how pro-Kremlin propaganda and disinformation are continuously and overtly spread online. The swift action taken by the European Commission and online platforms to restrict access to Russian state-owned outlets raises questions for the future of platform regulation.

This Dispatch looks at digital policy developments over the course of the Russia Today (RT) and Sputnik ban, offering recommendations for how platforms and governments can respond to propaganda and information operations, taking into account upcoming legislation in the EU and beyond. 

_________________________________________________________________________________ 

Russian propaganda permeates online debates

Russia’s overt propaganda activities have been channelled through state-owned outlets, including RT and Sputnik. Pro-Kremlin content from these outlets, produced in multiple languages and legitimising Russia’s military aggression, has been shared on social media and via messaging services in the form of website links, articles and posts.

ISD analysis has shown how pro-Kremlin disinformation operations have found sympathetic audiences on social media across Europe. On TikTok, RT and Sputnik spread disinformation that describes Ukraine as the aggressor and frames its soldiers and political leaders as Nazis. On Facebook, ISD found that German-language COVID-19 sceptic, anti-vaxxer, far-right and right-wing populist groups posted content from RT DE more frequently than from any other news website, including disinformation framing Russia’s invasion of Ukraine as an operation to “denazify” the country and protect civilians. On Telegram, conspiracy theorist and right-wing extremist channels and groups have shared RT and Sputnik articles that repeat Putin’s narrative of a ‘genocide’ against Russians in Ukraine.

Ban of RT and Sputnik: Ambiguity of scope?

On 2 March 2022, the EU imposed restrictive measures to suspend the broadcasting activities of Sputnik and RT in the EU. The regulation noted that these measures were in response to the “gravity of the situation” and that they are “consistent with the fundamental rights and freedoms recognised in the Charter of Fundamental Rights, in particular with the right to freedom of expression and information.” The suspension of RT and Sputnik is temporary and will be maintained “until the aggression against Ukraine is put to an end.” 

Specifically, the regulation prohibits operators from broadcasting, or enabling, facilitating or otherwise contributing to the broadcast of, any content by RT and Sputnik. This includes transmission or distribution by any means, such as cable, satellite, IP-TV, service providers, video-sharing platforms or applications. Any broadcasting licence or authorisation, and any transmission and distribution arrangements, must be suspended.

In the course of the suspension, online platforms have taken action to restrict access to RT and Sputnik. For example, YouTube has blocked access to Russian state-funded media globally. In the EU, Google Play has removed apps not only from RT and Sputnik but from other Russian state-funded media as well. Telegram has begun blocking content from Sputnik and RT for users with an EU phone number. Meta has blocked access to RT and Sputnik across the EU and UK.

While the ban of RT and Sputnik seems straightforward at first glance, a closer look raises considerable questions about its scope. 

Legal expert Dr Björnstjern Baade of the Free University of Berlin points out that the regulation itself does not define the term ‘broadcasting’. Instead, it is necessary to refer to the EU Audiovisual Media Services Directive (AVMSD), in which broadcasting is defined as a “linear audiovisual media service”. On this legal understanding, the ban covers the online distribution of RT and Sputnik’s linear services (such as television and radio streams), but not on-demand streaming or content published by individual users.

The European Commission appears to be pursuing a broader interpretation in enforcing the regulation. In a content removal request to Google, the Commission states that the regulation “intends to set out a very broad and comprehensive prohibition” to ensure that RT and Sputnik content does not appear in search engine results or on social media as posts from individual users or RT/Sputnik accounts. 

As recognised in the request itself, this wider interpretation of the ban departs from the principle prohibiting general monitoring obligations (Article 15, e-Commerce Directive). Simply put, this principle prohibits EU Member States from obliging platforms to monitor and filter all or most content on their services in order to detect and prevent unlawful activity in general. To be sure, the ban is a temporary measure in a time of unprecedented crisis, and the principle does not ultimately prevent the introduction of regulatory take-down obligations for specific illegal content. However, suspending individual users’ content that replicates any RT and Sputnik content is hardly feasible without introducing general monitoring, and thus without risking profound interference with freedom of expression.
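To make the scale of this concrete: even catching only verbatim or lightly edited copies of banned articles would mean comparing every new post against every banned article. A minimal Python sketch of such near-duplicate matching, using word shingles and Jaccard similarity (the texts and threshold are placeholders, not a real deployment), illustrates the kind of check that would have to run platform-wide:

    # Illustrative sketch of the near-duplicate matching a platform would need
    # to run against every new post to catch verbatim or lightly edited copies
    # of banned articles. Texts and threshold are placeholders.
    def shingles(text: str, k: int = 5) -> set[str]:
        """Return the set of k-word shingles of a text."""
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 0))}

    def jaccard(a: set[str], b: set[str]) -> float:
        """Jaccard similarity of two shingle sets (0 = disjoint, 1 = identical)."""
        if not a or not b:
            return 0.0
        return len(a & b) / len(a | b)

    banned_articles = [shingles("placeholder text of a banned article ...")]

    def replicates_banned_content(post_text: str, threshold: float = 0.6) -> bool:
        """True if a post's shingles overlap heavily with any banned article."""
        post = shingles(post_text)
        return any(jaccard(post, article) >= threshold for article in banned_articles)

    # Under the broad reading of the ban, a check like this would have to run
    # on every post, comment and message on the platform, i.e. general monitoring.
    print(replicates_banned_content("placeholder text of a user post ..."))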

In view of the intended scope of the ban of RT and Sputnik, we should ask how to counter Russian war propaganda effectively without paving the way for content-focused approaches to propaganda and disinformation in general. To grasp the complexities involved, we will first take a closer look at how the current ban is being circumvented.

RT and Sputnik are successfully circumventing the ban

Russian state-owned outlets and their audiences are actively testing the limits of the ban through a variety of workarounds. Staff accounts of RT and Sputnik, as well as other Russian state-owned outlets such as RIA Novosti, continue to spread Russian propaganda on social media. Other media outlets are publishing verbatim copies of RT and Sputnik articles. For example, an article citing the Russian Foreign Ministry, published by ZAROnews on 2 March 2022, was a copy of an article published by Sputnik a day earlier. It is unclear why these outlets repost such content: they may be seeking economic gain (e.g. increased website traffic that converts into advertising revenue), or they may be ideologically aligned with RT and Sputnik. RT DE, whose original domain is now inaccessible, has shifted to near-identical domain names. RT DE has also registered accounts on alternative platforms, including VK, Odysee and Yandex; VK is mainly used to link to other platforms providing video content, while Odysee is used to link to copycat accounts on mainstream platforms.
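Near-identical replacement domains are, at least in principle, detectable automatically. The following minimal Python sketch (not ISD’s actual tooling; all domain names are hypothetical placeholders) flags newly observed domains whose registrable label closely resembles that of a blocked outlet:

    # Minimal sketch: flag newly observed domains whose registrable label
    # closely resembles a blocked outlet's label. All names are hypothetical.
    from difflib import SequenceMatcher

    BLOCKED_LABELS = ["rtde", "sputniknews"]  # labels of the suspended outlets

    def label(domain: str) -> str:
        # Naive registrable-label extraction; real tooling would consult a
        # public-suffix list.
        parts = domain.lower().split(".")
        return parts[-2] if len(parts) >= 2 else parts[0]

    def flag_lookalikes(domains, threshold: float = 0.75):
        """Yield (domain, blocked_label, score) for close string matches."""
        for domain in domains:
            for blocked in BLOCKED_LABELS:
                score = SequenceMatcher(None, label(domain), blocked).ratio()
                if score >= threshold:
                    yield domain, blocked, round(score, 2)

    # Hypothetical feed of newly registered domains (e.g. from certificate
    # transparency logs):
    observed = ["rt-de.example", "sputnik-news.example", "unrelated.example"]
    print(list(flag_lookalikes(observed)))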

Another side effect of the ban has been to increase the outlets’ appeal among conspiracy communities, who interpret the ban as a sign that elites are seeking to conceal the truth. Even without any direct links to RT or Sputnik, supporters have continued to spread Russian propaganda of their own accord. For example, far-right and conspiracy theorist Telegram users have been sharing methods to circumvent the ban on RT DE, including the use of a VPN, the Tor browser, or switching DNS servers. Other workarounds ISD has identified include apparently inauthentic Twitter accounts aligned with the Chinese Communist Party (CCP) spreading and amplifying disinformation about the war.

These attempts to circumvent the suspension demonstrate that a complete ban of RT and Sputnik content would require further-reaching restrictions on users’ freedom of expression, effectively ending the prohibition on general monitoring: platforms would need to monitor not only the suspended outlets and their content, but any messages and narratives that copy or reflect the meaning of prohibited content. In this context, it becomes necessary to reconsider the political reasoning behind this intervention before moving on to our policy recommendations.

Disinformation or war propaganda as justification?

Article 20(1) of the International Covenant on Civil and Political Rights (ICCPR) states that “any propaganda for war shall be prohibited by law.” On this basis, the European Commission’s Vice President and Commissioner for Values and Transparency, Věra Jourová, argued, “we all stand for freedom of speech but it cannot be abused to spread war propaganda.” 

The regulation enacting the ban states that it was imposed in response to Russia’s “international campaign of media manipulation and distortion of facts in order to enhance its strategy of destabilisation of its neighbouring countries, the EU and its member states” and its “continuous and concerted propaganda actions”. It should be noted, however, that the disinformation and propaganda spread by RT and Sputnik have included content that is protected by freedom of speech: misleading yet truthful information that is not illegal per se enjoys this protection. In contrast, propaganda for war, meaning advocacy of a war of aggression, is prohibited by law, even if an opinion supporting such a war were based entirely on true information. The legal distinction between war propaganda on the one hand and propaganda and disinformation in general on the other matters not only for legally justifying a ban of RT and Sputnik, but also for avoiding over-reaching, content-based approaches to disinformation as such. Platforms should not be incentivised to introduce large-scale monitoring of harmful content that may nonetheless be protected by freedom of expression.

The challenge of balancing freedom of expression against preventing the dissemination of harmful content can be seen in the response of the Internet Corporation for Assigned Names and Numbers (ICANN) to a request by Ukrainian Deputy Prime Minister Mykhailo Fedorov to restrict Russia’s access to the internet’s domain name system. ICANN replied that such far-reaching restrictions were not practical and risked fuelling repressive regimes, and that broad, unimpeded access to the internet is instead needed to help users find reliable information and to counter propaganda and disinformation.

Regulatory interventions and platform action should preserve an open and democratic online information environment. Rather than resorting to broad-based censorship and removal, they should tackle how platform design and systems promote the dissemination and amplification of propaganda and disinformation.

Toward systemic approaches: Ensuring transparency and accountability of online platforms 

The challenge of effective enforcement and the concerns over fundamental rights highlight the complexity of platform regulation when it comes to disinformation and propaganda. It is necessary to strictly enforce measures against war propaganda without disproportionately targeting individual content that may be protected by freedom of expression. Simultaneously, there is momentum to oblige platforms to make meaningful changes to their systems to reduce the widespread dissemination of disinformation and propaganda. 

While the current context of the war in Ukraine is yet another sign of the pressing need to address the dynamics of social media, governments should act knowing interventions could set a precedent for the future of digital policy and its legal interpretation. We have outlined recommendations for consideration and action by governments below.

Accountable decision-making 

To begin with, when introducing content-based regulatory interventions, governments should clarify what content and what types of services are in scope. The European Commission’s broad interpretation of ‘broadcasting’ in the regulation should be subject to parliamentary and public scrutiny. Although temporary in nature, such measures and subsequent take-down requests will be used as precedent in future debates.

Specifically, it is necessary to discuss what types of content and what kind of services are to be addressed, what potential thresholds (e.g. the size and/or nature of services) should apply, and how the rules can effectively be enforced. Moreover, potential repercussions for user rights and the freedom of expression need to be discussed more extensively. 

Despite the crisis situation and the ongoing efforts of all stakeholders, far-reaching restrictive measures determining what is legal and what is not must not be decided behind the closed doors of governments and platforms, without democratic oversight.

Systemic risk management

In view of the upcoming Digital Services Act (DSA) and the associated update of the Code of Practice on Disinformation (CoPD), systemic approaches toward disinformation and propaganda, especially in crisis situations, need to be defined. While upholding the core principles of the e-Commerce Directive, online platforms must be held accountable for the risks their services pose.

  • Duties of care should oblige platforms to conduct systemic risk management, including assessing and mitigating algorithmic amplification, risky platform design features (such as limitless reshares; see the sketch after this list) and all forms of manipulative behaviour. Systemic changes should be made globally, closing the linguistic gaps in platforms’ content moderation in non-English-speaking countries.
  • Platforms should have the resources to implement transparent crisis protocols such as ad-hoc risk assessments in unforeseen extreme situations. 
  • Enhanced fact-checking, labelling and down-ranking of disinformation should help users globally to find factual information. 
  • Strong safeguards should ensure truly independent auditing of platforms, with clear requirements and no conflicts of interest.
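To make “limitless reshares” concrete: one design-level mitigation, similar in spirit to the forwarding limits some messaging apps already apply, is to cap how deep a reshare cascade can run. A minimal Python sketch, with an assumed data model and an arbitrary threshold, might look as follows:

    # Illustrative sketch of a design-level mitigation: capping reshare depth.
    # The Post model and the threshold are assumptions, not any platform's
    # actual implementation.
    from dataclasses import dataclass
    from typing import Optional

    MAX_RESHARE_DEPTH = 5  # hypothetical cap on how far content can cascade

    @dataclass
    class Post:
        id: str
        reshared_from: Optional["Post"] = None  # None for original posts

    def reshare_depth(post: Post) -> int:
        """Number of reshare hops between this post and the original."""
        depth = 0
        while post.reshared_from is not None:
            depth += 1
            post = post.reshared_from
        return depth

    def can_reshare(post: Post) -> bool:
        """Disallow resharing once a cascade reaches the depth cap."""
        return reshare_depth(post) < MAX_RESHARE_DEPTH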

Meaningful transparency

A broad understanding of the reach and impact of war propaganda, and of its distinction from disinformation and propaganda in general, is essential for identifying and promoting appropriate policy responses. Platforms should ensure robust and secure data access for vetted researchers from civil society and for journalists, not just academia.

  • Online platforms should establish license-free and easily accessible Application Programming Interfaces (APIs) for research purposes (a sketch of such a query follows this list). This means that all functions of the platform that are public (and/or carry a reasonable user expectation of visibility) are computationally transparent and accessible.
  • APIs should allow researchers to analyse all content circulating on platforms, both live and historical data, searchable by identifiers and provided in a machine-readable format.
  • Vetted researchers would need a guarantee that platforms will not instrumentalise their Terms of Service to discourage privacy-compliant public-interest research.
  • Enforcement of data access for researchers should not be subject to the economic interests of platforms. The possibility of introducing a public data trustee with a strong mandate should be discussed.
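
To illustrate what license-free, machine-readable and identifier-searchable access could look like in practice, here is a sketch of a query against a hypothetical research API; the endpoint, parameters and response fields are assumptions, not any platform’s actual interface:

    # Sketch of a query against a hypothetical research API. The endpoint,
    # parameters and response fields are illustrative assumptions.
    import json
    import urllib.parse
    import urllib.request

    BASE = "https://api.platform.example/v1/research"  # hypothetical endpoint

    def fetch_posts(query: str, since: str, until: str, cursor: str = ""):
        """Fetch a page of public posts matching a query over a time window."""
        params = urllib.parse.urlencode(
            {"q": query, "since": since, "until": until, "cursor": cursor}
        )
        with urllib.request.urlopen(f"{BASE}/posts?{params}") as resp:
            return json.load(resp)  # machine-readable JSON

    # Example: historical, identifier-searchable access to public posts that
    # link to a given (hypothetical) domain.
    page = fetch_posts("url:sputniknews.example", since="2022-02-24", until="2022-03-02")
    for post in page.get("data", []):
        print(post["id"], post["author_id"], post["created_at"])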

Systemic digital policy options that address the inherent risks of platforms’ systems and design features should be pursued ambitiously by the European Commission to tackle the spread and reach of disinformation and propaganda. In particular, civil society and the research community should be closely involved to guarantee a balanced, evidence-based policy approach. 

If we manage to use the momentum for meaningful and considered policy practices, regulatory interventions can play a decisive and positive role internationally in shaping a free, democratic and secure online information environment.

 

Sara Bundtzen is a Research and Policy Associate at ISD Germany.

Mauritius Dorn is a Digital Policy Fellow at ISD Germany.
