Evidencing a rise in anti-Muslim and anti-migrant online hate following the Southport attack

ISD & CASM Technology

3 September 2024

Following the Southport attack and the subsequent widespread far-right riots, this article quantifies the growth of anti-Muslim and anti-migrant hate on Telegram and X.


Far-right[1] riots in the UK in the aftermath of the Southport attack resoundingly demonstrated the role of social media in facilitating the viral spread of misinformation, hate and extremist mobilisations. The riots did not appear from nowhere but occurred against the backdrop of widespread anti-Muslim and anti-migrant prejudice on social media. Across both large mainstream platforms such as X and more fringe messaging services such as Telegram, ISD evidenced how users and channels with large followings shared harmful stereotypes about Muslims and migrants in the immediate aftermath of the attack.

This article quantifies the growth of hate on both Telegram and X following the Southport attack. Leveraging bespoke automated detection software at scale, it both evidences a rise in hateful online content and provides an initial analysis of the narratives used to spread it on each platform. Analysts also investigate the hashtag #TwoTierKeir – a hashtag suggesting that Prime Minister Starmer supports a two-tier policing system – as an example of borderline content, which was used to spread anti-Muslim and anti-migrant stereotypes in a covert manner, thereby reaching huge audiences.

Key Findings

To assess the volume of online hate directed at Muslims and migrants in response to the Southport attack, ISD and CASM collected nearly 45,000 messages across 55 British far-right Telegram channels, selected based on existing monitoring work. Analysts found that:

  • In the ten days following the Southport attack, anti-migrant hate on Telegram rose by 246% and anti-Muslim hate by 276%.
    • Calls for violence against Muslim and migrant communities peaked on 4 and 5 August.
  • On X, the use of anti-Muslim slurs more than doubled in the ten days after the Southport attack, with over 40,000 posts containing one or more of these terms.
    • Anti-Muslim sentiment was also spread on X via hashtags, which collectively received almost five million views.
  • The first mentions of #TwoTierKeir on X were made by an account with under 1,000 followers on 31 July in response to Keir Starmer’s statement that far-right rioters would face serious law enforcement consequences.
    • Between 31 July and 8 August, the hashtag was mentioned nearly 45,000 times by over 12,000 unique authors and seen over 100 million times.

Anti-Muslim and anti-migrant hate more than tripled on Telegram following the Southport attack

Methods

Analysts sought to understand the toxicity of digital ecosystems by measuring changes in the volume of anti-Muslim and anti-migrant hate following the Southport attack on 29 July and subsequent riots in cities across the UK.

To conduct the Telegram analysis, ISD and CASM collected nearly 45,000 messages across 55 British far-right Telegram channels from 19 July to 8 August (ten days before the Southport attack to ten days after). An initial list of channels was identified using existing seed lists and ethnographic work, followed by account expansion and data cleaning. Channels were only included if based in the UK, assessed by the location of known individuals, by channel names, or by whether the vast majority of content related to UK-based events. Channels were also selected for their relevance to far-right extremism based on existing ethnographic research; they ranged from white supremacist and anti-Muslim extremist channels to football hooligan groups and accounts supporting known far-right groups and networks, such as those around Tommy Robinson.
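
For illustration, the snippet below sketches how such a collection window could be implemented with the open-source Telethon library. The credentials and channel names are placeholders, and this is not necessarily the tooling ISD and CASM used.

```python
# Minimal sketch of the collection step using the open-source Telethon
# library. API credentials and channel names are placeholders; the actual
# ISD/CASM channel list and pipeline are not public.
from datetime import datetime, timezone
from telethon.sync import TelegramClient

API_ID = 12345              # placeholder
API_HASH = "your_api_hash"  # placeholder
CHANNELS = ["example_channel_one", "example_channel_two"]  # placeholder seeds

START = datetime(2024, 7, 19, tzinfo=timezone.utc)
END = datetime(2024, 8, 8, 23, 59, 59, tzinfo=timezone.utc)

messages = []
with TelegramClient("session", API_ID, API_HASH) as client:
    for channel in CHANNELS:
        # reverse=True iterates chronologically from the offset date onwards
        for msg in client.iter_messages(channel, offset_date=START, reverse=True):
            if msg.date > END:
                break
            if msg.message:  # skip media-only posts
                messages.append(
                    {"channel": channel, "date": msg.date, "text": msg.message}
                )

print(f"Collected {len(messages)} messages from {len(CHANNELS)} channels")
```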

Channels included both broadcast channels and chat groups, with the latter generating a greater volume of activity due to the greater interactivity between users. The vast majority of messages (76%) were posted in a chat group linked to Tommy Robinson, Britain First and the For Britain movement.

Hate was measured against ISD’s definition: ‘activity which seeks to dehumanise, demonise, harass, threaten, or incite violence against an individual or community based on religion, ethnicity, race, sex, gender identity, sexual orientation, disability, national origin or migrant status’. This was understood in the context of the existing tropes and conspiracies that constitute Islamophobia/anti-Muslim hatred.

To identify hate within the dataset, keyword filters built on previous analysis by ISD and CASM Technology were applied to the collected Telegram messages to detect mentions of and references to Muslims and migrants. These filters included neutral identifying words as well as hateful phrases and slurs. This process yielded 2,758 messages likely related to Muslims and 2,674 messages likely related to migrants. A generative large language model was then used to classify these messages based on the presence of anti-Muslim and anti-migrant sentiment.
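
The full keyword lists are not published. As a rough sketch of the filtering approach, using a handful of neutral identifying terms only:

```python
import re

# Illustrative neutral identifying terms only; the full ISD/CASM keyword
# lists (which also include hateful phrases and slurs) are not public.
MUSLIM_TERMS = ["muslim", "muslims", "islam", "mosque", "mosques"]
MIGRANT_TERMS = ["migrant", "migrants", "asylum seeker", "refugee", "small boats"]

def build_pattern(terms):
    # Case-insensitive whole-word match, longest terms first so that
    # longer phrases are not shadowed by shorter ones.
    escaped = sorted((re.escape(t) for t in terms), key=len, reverse=True)
    return re.compile(r"\b(?:" + "|".join(escaped) + r")\b", re.IGNORECASE)

MUSLIM_RE = build_pattern(MUSLIM_TERMS)
MIGRANT_RE = build_pattern(MIGRANT_TERMS)

def tag_message(text):
    """Flag whether a message likely relates to Muslims and/or migrants."""
    return {
        "muslim_related": bool(MUSLIM_RE.search(text)),
        "migrant_related": bool(MIGRANT_RE.search(text)),
    }

print(tag_message("Another small boats arrival reported today"))
# {'muslim_related': False, 'migrant_related': True}
```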

For each message, the model assigned a classification for anti-Muslim or anti-migrant sentiment along with a confidence score; only messages scoring 80% or higher were analysed. A manually reviewed sample confirmed that these messages contained anti-Muslim or anti-migrant sentiment, or both.
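
The article does not name the model or prompt used; the sketch below only illustrates the thresholding logic, assuming a JSON-returning classification prompt and a generic `llm_call` function.

```python
import json

# Sketch of the classification-and-threshold step. The prompt wording and
# JSON response schema are assumptions; the article does not name the
# model used. `llm_call` stands in for any function that sends a prompt
# to an LLM and returns its raw text response.
PROMPT = (
    "Classify the following message. Respond in JSON with fields "
    "'anti_muslim' (true/false), 'anti_migrant' (true/false) and "
    "'confidence' (a number between 0 and 1).\n\nMessage: {text}"
)

THRESHOLD = 0.80  # only high-confidence classifications were analysed

def filter_hateful(messages, llm_call):
    kept = []
    for msg in messages:
        result = json.loads(llm_call(PROMPT.format(text=msg["text"])))
        if result["confidence"] >= THRESHOLD and (
            result["anti_muslim"] or result["anti_migrant"]
        ):
            kept.append({**msg, **result})
    return kept
```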

Findings  

A total of 1,892 hateful messages were identified across the analysed channels between 29 July and 8 August, including 983 anti-migrant messages and 1,014 anti-Muslim messages, with some posts falling into both categories. In the ten days following the Southport attack, anti-migrant messages rose by 246% and anti-Muslim messages by 276% compared with the previous ten days (19 July – 28 July). Daily volumes show a small peak in the immediate aftermath of the Southport attack, followed by a larger secondary peak over the weekend when the worst rioting took place.
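
The per-day counts behind these figures are not published, but the period-on-period comparison reduces to simple arithmetic. A minimal sketch, assuming a daily count series:

```python
import pandas as pd

# `daily` is a Series of hateful-message counts indexed by date. The
# per-day totals are not published, so this only illustrates the
# calculation behind the headline percentages.
def period_rise(daily, pre=("2024-07-19", "2024-07-28"),
                post=("2024-07-29", "2024-08-08")):
    before = daily.loc[pre[0]:pre[1]].sum()
    after = daily.loc[post[0]:post[1]].sum()
    return (after - before) / before * 100  # 246 means a ~3.5x increase
```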


Qualitative analysis of anti-Muslim and anti-migrant hate speech on Telegram reveals a disturbing pattern of interconnected racism, xenophobia and conspiracy theories fuelling far-right extremist rhetoric on the platform. Migrants and Muslims are frequently dehumanised, labelled as “dirty” or “boat people”, and associated with criminality. These messages often link to broader conspiracies, such as a deliberate agenda to “import” the “worst” types of Muslims, and to conspiracy theories about population replacement. Such ideas are often inherently connected to antisemitism, with Jewish people accused of deliberately orchestrating such events, as was claimed during the COVID-19 pandemic.

As unrest escalated in early August, the hate speech intensified: channels saw a surge in calls for mass deportations and in violent rhetoric. Repeated copy-paste messages explicitly advocating the burning of mosques and the killing of Muslims peaked on 4 and 5 August. Egregious anti-Black racist comments, at times referencing the Southport murder suspect, proliferated. This uptick in hate speech coincided with the most widespread riots, reflecting how online discourse can rapidly mirror and exacerbate real-world violence.

The messaging on the analysed Telegram channels illustrates the platform’s role as an amplifier of harmful ideologies, evident in the progression from initial expressions of hate speech to outright incitement to violence with very little visible intervention from the platform. Telegram not only allows these extremist narratives to flourish but also facilitates their rapid spread, underscoring the urgent need for effective monitoring and intervention to prevent further escalation and potential offline harm.  

After Southport, posts containing anti-Muslim slurs more than doubled on X

Methods 

A parallel study of the volume of hate on X indicates that the rise in anti-Muslim hate was also acute in mainstream online spaces. The analysis of X data used a list of anti-Muslim slurs and hateful phrases with a high likelihood of appearing in hateful content. This keyword list, used in previous research to analyse anti-Muslim hate, comprised 96 terms.

Findings 

In the ten days following the Southport attack, over twice as many anti-Muslim slurs (a 118% increase) were posted on X as in the previous period, with over 40,000 posts in total. This is likely an underestimate of the total volume of anti-Muslim hate on social media, given the implicit and coded nature of many comments. The spike in anti-Muslim slurs observed on 24 July likely corresponds to the Manchester Airport incident, which was being widely discussed on far-right channels at the time.


Anti-Muslim hashtags were also used to spread hate on X. In the ten days following the attack, the hashtags #banislam, #stopislam, #fuckislam, #islamistheproblem, #islamiscancer, #deportmuslims and #deport_muslims were posted on X nearly 5,000 times in total. These hashtags were seen by X users nearly five million times during this period.   
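
For illustration, hashtag totals of this kind might be computed as below; the post structure and 'views' field are assumptions rather than X's actual API schema.

```python
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#\w+")

TRACKED = {"#banislam", "#stopislam", "#fuckislam", "#islamistheproblem",
           "#islamiscancer", "#deportmuslims", "#deport_muslims"}

def hashtag_totals(posts):
    """posts: iterable of dicts with 'text' and 'views' fields
    (field names are assumptions). Returns per-hashtag post counts
    and summed impressions."""
    counts, views = Counter(), Counter()
    for post in posts:
        tags = {t.lower() for t in HASHTAG_RE.findall(post["text"])}
        for tag in tags & TRACKED:
            counts[tag] += 1
            views[tag] += post.get("views", 0)
    return counts, views
```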

#TwoTierKeir was used to popularise existing far-right conspiracies  

As violence grew, the X hashtag #TwoTierKeir promoted the idea that rioters were being unfairly targeted. The allegation of two-tier policing draws on the longstanding far-right narrative that Muslim communities receive preferential treatment from law enforcement and other national bodies. This in turn feeds the idea that white communities are being disadvantaged not only by Muslim and migrant communities, but by the complicity of liberal democratic institutions. Carefully selected videos of police responses to protests, both by the far right and by people of diverse ethnic backgrounds, were shared on far-right Telegram channels in an attempt to show an unduly harsh response to white people.

While this hashtag is not overtly hateful, it implicitly invokes existing anti-Muslim and anti-migrant prejudices. It may therefore not specifically contravene platform Terms of Service, but it serves to normalise and further entrench anti-Muslim conspiracies in mainstream discourse. Platforms should therefore approach moderation protocols with an understanding of the nuances of extremist content. In line with the Online Safety Act’s approach, large platforms must consider the risks that covertly hateful content poses to users. This should include devoting increased resources and attention to hashtags that are likely to elicit hateful replies. Platforms should also ensure that their recommender algorithms are not promoting such hashtags to new audiences, even unintentionally.


The first mentions of #TwoTierKeir were made by a small account with under 1,000 followers on 31 July, in response to Keir Starmer’s statement that far-right rioters would face serious consequences from law enforcement. The hashtag then grew organically over the following days. Between 31 July and 8 August it was mentioned nearly 45,000 times by over 12,000 unique authors, including highly followed and verified accounts belonging to public figures, and was seen over 100 million times during this eight-day period.
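
A minimal sketch of how the hashtag's daily spread could be measured, assuming a table of posts with date and author columns (names illustrative):

```python
import pandas as pd

# `posts` is a DataFrame of #TwoTierKeir posts with 'date' (datetime)
# and 'author_id' columns; the column names are illustrative.
def hashtag_spread(posts):
    grouped = posts.groupby(posts["date"].dt.date)
    return pd.DataFrame({
        "mentions": grouped.size(),
        "unique_authors": grouped["author_id"].nunique(),
    })

# Overall reach across the window:
# posts["author_id"].nunique()  -> e.g. >12,000 unique authors
```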

Conclusions  

Across both Telegram and X, this research has evidenced significant spikes in anti-Muslim and anti-migrant hate in the aftermath of the Southport attack. This took the form of both overt slurs and hashtags, and more covert conspiracies. Such borderline content may not be immediately recognisable to mainstream users as hateful, but nevertheless serves to amplify far-right conspiracies. This type of content on mainstream platforms contributes to the continued mainstreaming and normalisation of the othering of Muslim and migrant communities.  

X’s Hateful Conduct policy bans the spread of hate based on protected characteristics. Some of the content identified here is therefore likely to breach these policies, yet it was not subject to effective moderation: overt anti-Muslim hate spread widely and reached large audiences in the aftermath of the Southport attack. X should urgently review the efficacy of its policies and their enforcement. Such policies must also contend with borderline hashtags such as #TwoTierKeir, which may not explicitly contravene Terms of Service but contribute to an online environment hostile to vulnerable communities.

Telegram has long functioned as a permissive environment for various forms of hate and extremism. The European Commission is considering labelling Telegram a Very Large Online Platform (VLOP), which would strengthen legal mechanisms for accountability. This article’s findings of the significant spike in anti-Muslim and anti-migrant hate provide yet another piece of evidence for urgently bringing Telegram under digital regulation efforts at both EU and UK level.  

Endnotes

[1] There is no agreed definition of what constitutes the ‘extreme right’. A widely accepted definitional minimum of ‘far right’ created by Cas Mudde identifies five core elements common to the majority of definitions: strong-state values, nationalism, xenophobia, racism and anti-democracy. Within the broad framework of ‘far right’, Mudde identifies two branches, the radical right and the extreme right, which are differentiated by attitudes towards strategic political violence (typically supported by the extreme right) and democracy (while the extreme right rejects all forms of democracy, the radical right opposes liberal democracy while working within democratic frameworks).

The sole responsibility for any content supported by the European Media and Information Fund lies with the author(s) and it may not necessarily reflect the positions of the EMIF and the Fund Partners, the Calouste Gulbenkian Foundation and the European University Institute.