11th November 2021
By Elise Thomas
‘Sophisticated’ is a word which is often bandied about in discussions of social media influence operations. However, as highlighted by ISD’s latest investigation into pro-Russian disinformation outlet News Front, sometimes the actors behind these operations don’t need to be especially sophisticated. They just need to be persistent.
Despite Facebook’s claims of continuous enforcement, it appears that some well-known actors are able to return to the platform again and again using largely the same tactics.
News Front is a Crimea-based organisation which has been sanctioned by the United States for its role in spreading pro-Russian disinformation. News Front’s operation was first removed by Facebook in April 2020 for violating policies on foreign interference and coordinated inauthentic behaviour. However, in early 2021, reports including ISD’s The Long Tail of Influence Operations: A Case Study on News Front, and by the Alliance for Securing Democracy and EUvsDisinfo exposed News Front’s ongoing and active presence on Facebook.
These reports often highlighted the specific tactics News Front was using to evade Facebook’s content moderation efforts, in particular the use of link cloaking and mirror domains. Facebook responded by removing identified accounts and banning identified mirror domains.
As ISD’s latest investigation found, News Front resurfaced on Facebook just months later, with no notable innovations in its tactics. Once again, link cloaking and mirror domains were both being used to smuggle News Front content onto the platform. Some of these mirror domains are almost identical to those previously suspended.
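This near-identical reuse of domains is, in principle, straightforward to detect. As a purely illustrative sketch (not ISD's or Facebook's actual tooling, and using hypothetical domain names), a researcher could flag likely mirror domains simply by measuring the edit distance between newly observed domains and a list of previously banned ones:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def likely_mirrors(candidates, banned, max_distance=3):
    """Pair each candidate domain with any banned domain within max_distance edits."""
    hits = []
    for c in candidates:
        for b in banned:
            if edit_distance(c, b) <= max_distance:
                hits.append((c, b))
    return hits

# Hypothetical example domains, not real News Front infrastructure:
banned = ["news-front.example"]
candidates = ["news-front1.example", "unrelated-site.example"]
print(likely_mirrors(candidates, banned))
# → [('news-front1.example', 'news-front.example')]
```

A crude filter like this would obviously produce false positives and misses at scale; the point is only that when an operation re-registers domains one character away from its banned ones, catching the pattern does not require sophisticated machinery.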
This echoes a pattern noted with other disinformation and influence operations. For example, the actor or group of actors sometimes referred to as Spamouflage, which engages in distinctive pro-Chinese government campaigns, has been exposed again and again and again. Each time, a similar process ensues: social media platforms ban the accounts identified by researchers, sometimes uncovering others in the process. Over the following weeks the campaign replenishes its stock of accounts, which appear to be bought in bulk from commercial suppliers. These accounts are once again deployed to produce similar (or often exactly the same) content and disseminate it using similar methods until the operation is exposed again, and the cycle repeats.
To date, neither News Front nor Spamouflage has responded to repeated bans and removals with significant innovation. They’re not employing sophisticated or novel tricks. They’re just doing the same thing over and over again – and evidently, it works.
Ongoing enforcement is a foundational principle of online content moderation. In monthly reports on coordinated inauthentic behaviour on its platform, Facebook states that it “[monitors] for efforts to re-establish a presence on Facebook by networks we previously removed. Using both automated and manual detection, we continuously remove accounts and Pages connected to networks we took down in the past.”
News Front and Spamouflage are both known entities for Facebook. Despite multiple previous suspensions and recent sanctions levelled against News Front by the US government, the network has been able to return to the platform multiple times using essentially the same tactics as it did previously.
If Facebook’s ‘continuous enforcement’ is insufficient to keep even well-researched and operationally consistent actors at bay, questions need to be asked about how effective that approach really is.
It is not clear what the current balance is between automated and manual investigations in Facebook’s continuous moderation efforts. It is understandable that Facebook would want to maintain a level of opacity around this in order to prevent bad actors from gaming the system. However, sharing more details with legitimate researchers and partners about how Facebook’s continuous enforcement works in practice would be a useful step.
Given Facebook’s policies around data scraping, the researchers who continue to rediscover these actors are working largely manually. If they are able to identify this activity by hand while Facebook’s systems are not, that may suggest that Facebook’s balance between manual and automated approaches needs some adjustment. Preventing the resurgence of these actors may require more manual enforcement.
It would not be reasonable to expect Facebook to anticipate every genuinely sophisticated, novel strategy which bad actors might employ.
It is reasonable, however, to expect that lessons would be learned and applied to prevent the same actors from using the same strategies to re-establish largely the same operations over and over again.
What this investigation and much of the previous research highlight is the gulf between exposure and meaningful, lasting enforcement. Measures around greater transparency and cooperation with external researchers, while valuable, can only do so much. Independent research can help shine a spotlight on the problems, but ultimately only Facebook can take action to address them.
Simply put, Facebook needs to continue to enforce its own rules even after the spotlight of public attention moves on. Failure to do so makes a mockery of the entire content moderation effort.
Elise Thomas is an OSINT Analyst at ISD. She has previously worked for the Australian Strategic Policy Institute, and has written for Foreign Policy, The Daily Beast, Wired and others.