24 July 2023
By Elise Thomas
For most people, the riots in France were a concerning sign of social tensions boiling over amid the complex legacies of systemic injustice, socioeconomic inequities and biased policing. But anyone following the crisis on Twitter would be forgiven for thinking it was a full-blown civil war driven by racial tensions.
For days, major hashtags on the riots such as #FranceRiots and #FranceOnFire were dominated by misinformation and far-right, racist tweets casting events on the ground as the beginnings of a literal civil war in France. Many framed this supposed war as the inevitable result of France’s immigration policies, calling for action from white nationalist vigilantes and urging far-right political leaders like Marine Le Pen and Eric Zemmour to step in.
While the context was different, in many ways the informational meltdown on Twitter resembled what happened barely a week before during the abortive Wagner mutiny – an almost impenetrable morass of mis- and disinformation, propaganda, ill-informed self-aggrandisement and bad-faith actors trying to manipulate the narrative, with the active support of Twitter’s algorithms.
This state of affairs is the direct result of recent policy decisions by the platform. The role Twitter has come to play as a central node in the spheres of journalism, politics, finance and culture means that these decisions have serious implications which ripple far beyond the platform itself. And here’s the kicker: we can’t even find out how bad the problem is.
Twitter is downranking expertise and boosting low-quality sources, misinformation and propaganda
Twitter’s algorithm currently boosts tweets from Twitter Blue accounts in the For You timeline and on hashtags. When Elon Musk took over, many warned that his changes to how Twitter verifies accounts would have serious implications for mis- and disinformation on the platform.
The cumulative effect of these changes has been that Twitter’s algorithm now, in the aggregate, downranks informed contributors (most of whom have not purchased Twitter Blue) in favour of Twitter Blue subscribers. Many of the people promoted by the algorithm at best have no idea what they’re talking about, and at worst have a malicious agenda.
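To illustrate the mechanics, a subscription boost can be modelled as a flat multiplier applied at ranking time. The sketch below is a deliberately simplified, hypothetical model – the `blue_boost` factor, the scoring inputs and all the numbers are assumptions for illustration, not Twitter’s actual ranking code:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    engagement: float  # likes, retweets, replies, normalised to 0-1
    relevance: float   # how well the tweet matches the searched hashtag
    is_blue: bool      # whether the author pays for Twitter Blue

def score(tweet: Tweet, blue_boost: float = 4.0) -> float:
    """Toy ranking score: a flat multiplier for paying subscribers
    can outweigh large differences in relevance and engagement."""
    base = tweet.engagement * tweet.relevance
    return base * (blue_boost if tweet.is_blue else 1.0)

expert = Tweet("journalist", engagement=0.9, relevance=0.9, is_blue=False)
subscriber = Tweet("blue_user", engagement=0.4, relevance=0.7, is_blue=True)

# The lower-quality subscriber tweet ranks first under this model.
ranked = sorted([expert, subscriber], key=score, reverse=True)
```

Even a modest multiplier lets a less relevant, less engaged subscriber tweet outrank a well-informed non-subscriber – which is consistent with the behaviour observed on the riot hashtags.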
An extremely clear example of this emerged on the #FranceRiots, #FranceHasFallen and #FranceOnFire hashtags. For days as the riots raged on in early July, these hashtags were dominated by implicitly or overtly racist and white nationalist tweets, as well as a seemingly endless tide of misinformation and false or misleading videos. Twitter’s Community Notes worked only sporadically to address obviously false content (for example, one tweet of a misleading video might carry a Community Note debunking it, while another tweet sharing the same video had none).
A particular beneficiary of the Twitter algorithm has been Paul Golding, the leader of the far-right party Britain First, who was jailed for hate crimes in 2018 and charged under the UK’s Terrorism Act in 2020. Golding was banned from Twitter in 2017 in connection with a platform policy against “accounts that affiliate with organisations that use or promote violence against civilians to further their causes.” Golding had his account reinstated under Musk and is now Twitter Blue Verified. On 15 July, Musk’s personal Twitter account liked tweets from Golding expressing anti-diversity sentiments.
Golding’s tweets about the riots in France using the #FranceRiots hashtag have been consistently boosted by Twitter’s algorithm, at times recommended as the top tweet or dominating the first dozen tweets Twitter opted to show to users who searched for that hashtag. Golding’s tweets are heavily racially charged, for example referring to the riots as a ‘race war’ (a key concept in the white supremacist theory of accelerationism) or denigrating immigrants and black people.
In addition to this, many of the videos which Golding shared were clearly inaccurate – for example, a video he claimed showed protesters pushing cars off the roofs of buildings was actually a clip from the movie Fast & Furious 8. In another example, Golding characterised a video of masked and armed men as “armed rioters show[ing] off their arsenal of weapons in France.” The video was actually from 2020 and unrelated to the current protests.
None of this prevented Twitter from algorithmically boosting Golding as a key source on the French riots to millions of users seeking information about the situation from the #FranceRiots hashtag. Similar dynamics, again boosting Golding, played out on the #FranceHasFallen and #FranceOnFire hashtags.
Russian propaganda networks also appear to have been key drivers of the conversation around the French riots. These networks have been actively and enthusiastically sharing bogus content relating to the riots, in particular playing to racist and anti-immigrant sentiments – intersecting with the far-right’s efforts in a way which supports Russia’s broader interests.
To summarise: rather than amplifying journalists, experts or others who might have access to genuine information or valuable contextual knowledge to contribute, Twitter’s current algorithmic systems and policies have been promoting the views of a convicted criminal with extremist views, no particular knowledge of French politics and a clear racist agenda, who is using misinformation, Russian propaganda and hate speech to further his own goals.
This is not an isolated issue. It is a systemic platform problem, resulting directly from the policy decisions which have been made at Twitter since Musk took over.
Ordinarily, this is the bit where I would say ‘but don’t just take my word for it, here is the data to prove it.’ Unfortunately, that’s not possible anymore, which brings us on to the second problem.
Twitter is preventing independent research into what is happening on the platform
Twitter’s changes to API access, as well as sudden policy changes in a supposed effort to prevent data scraping by third-parties, have effectively broken many of the tools and methodologies used by external, public interest researchers to collect data on Twitter. Methods which may still be practically possible are nonetheless off the table for many in light of the new Developer Agreement and associated legal risks.
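For context, much of that external research depended on scripted calls to Twitter’s v2 search API, which was available to academic researchers at little or no cost before the pricing changes. The sketch below shows the general shape of such a collection script; the endpoint and parameters follow Twitter’s published v2 recent-search interface as I understand it, the bearer token is a placeholder, and under the new pricing actually running this call requires a paid plan:

```python
from urllib.parse import urlencode
from urllib.request import Request

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_request(hashtag: str, bearer_token: str, max_results: int = 100) -> Request:
    """Assemble a v2 recent-search request for a hashtag,
    excluding retweets and requesting creation timestamps."""
    params = urlencode({
        "query": f"#{hashtag} -is:retweet",
        "max_results": max_results,
        "tweet.fields": "created_at,author_id",
    })
    req = Request(f"{SEARCH_URL}?{params}")
    req.add_header("Authorization", f"Bearer {bearer_token}")
    return req

# Executing the request (now behind a paid tier) would look like:
#   with urllib.request.urlopen(build_request("FranceRiots", TOKEN)) as resp:
#       tweets = json.load(resp).get("data", [])
```

Scripts of roughly this shape underpinned most large-scale hashtag studies; revoking affordable access to this endpoint is what broke them.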
The handful of researchers who have continued to collect and analyse large-scale Twitter data have faced serious obstacles. Researcher Travis Brown, whose data on Twitter Blue signups was widely cited as some of the only such data available, had his Twitter account suspended in early July 2023. This comes at the same time as Twitter has significantly reduced its overall transparency.
This is just one example of how it is now very difficult for anyone outside Twitter (and, potentially, even for the remnants of the relevant teams inside Twitter) to conduct empirical, quantitative research about what is happening on the platform.
In the specific case of the French riots, this means that researchers are unlikely to be able to quantify the extent to which the extreme right has hijacked the Twitter conversation to propagandise for their own cause, or the degree to which state-linked propaganda networks have helped to chivvy that conversation along. It will also be difficult for anyone to independently assess why Golding’s account in particular received such an algorithmic boost, and whether that boost was organic or the result of coordinated manipulation.
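One standard technique researchers would apply here, if the data could still be collected, is co-posting analysis: flagging identical text published by many distinct accounts, a common signal of coordinated amplification. A minimal sketch, with an illustrative input format and threshold:

```python
from collections import defaultdict

def find_copypasta_clusters(posts, min_accounts=3):
    """Group posts by exact (normalised) text and flag any text pushed
    by at least `min_accounts` distinct accounts -- a simple indicator
    of possible coordinated amplification."""
    by_text = defaultdict(set)
    for author, text in posts:
        by_text[text.strip().lower()].add(author)
    return {text: authors for text, authors in by_text.items()
            if len(authors) >= min_accounts}

posts = [
    ("acct1", "France has fallen!"),
    ("acct2", "France has fallen!"),
    ("acct3", "France has fallen!"),
    ("acct4", "Here is some original reporting."),
]
clusters = find_copypasta_clusters(posts)  # flags only the repeated text
```

Real investigations layer further signals on top (account creation dates, posting-time synchronisation, retweet graphs), but even this basic check requires bulk data access that is no longer realistically available.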
The larger context is that it has simultaneously become both easier for Twitter’s information space to be manipulated, and harder for independent researchers to detect and quantitatively assess large-scale manipulation.
While it may be tempting to dismiss this as a niche issue, the reality is that – at least for now – what happens on Twitter matters. The platform’s user base is small but disproportionately influential in driving media, money markets and political attention around the world. So far, despite a growing number of competitor platforms, network effects seem to be anchoring many users to Twitter.
In the years before Musk, Twitter had evolved into an industry leader among the social media platforms when it came to transparency and data access. This was as much a business decision as an ethical one; the company understood that its relevance, and therefore its ability to attract advertisers, hinged on fostering trust among its users and among other stakeholders including advertisers, regulators and policymakers.
It seems fair to conclude that Twitter’s policy changes under Musk have done little to engender that trust. If Twitter is going to remain relevant to the public conversation (and financially viable as a platform for advertising), it must turn that trajectory around. As Musk has said himself, “transparency builds trust.” A first and obvious step would be to reconsider the changes made to data access for researchers and restore API access at a reasonable cost to allow for greater transparency.