From rumours to riots: How online misinformation fuelled violence in the aftermath of the Southport attack

31 July 2024

In the aftermath of a mass stabbing in Southport, UK, misinformation about the identity of the attacker, who is a minor and therefore cannot legally be named until criminal proceedings are complete, spread quickly across social media. The following evening, a community-organised vigil for the victims was hijacked by far-right rioting, mobilised through anti-Muslim and anti-migrant narratives with no factual basis. This dispatch explores the events that unfolded, the spread of hateful speech on- and offline, and the part played by platforms’ business models, algorithms and content moderation.


On Monday, 29 July, a stabbing attack at a Taylor Swift-themed children’s dance party in Southport, Merseyside, had at the time of writing killed three young girls and injured several more. The attack, one of the UK’s most significant mass-casualty events in recent history, is alleged to have been perpetrated by a local 17-year-old boy whose motive is as yet unknown.

Self-described ‘news’ accounts rapidly spread falsehoods about the perpetrator. One viral narrative falsely named him as “Ali al-Shakati” and claimed he was a Muslim migrant who had recently arrived in the UK. This was later debunked by the police. Nonetheless, false claims surrounding the attack quickly garnered millions of views online, amplified by anti-Muslim and anti-migrant activists and promoted by platforms’ recommender systems. Far-right networks – a mix of formal groups and a broader ecosystem of individual actors – used this spike in activity to mobilise online, organising anti-Muslim protests outside the local mosque, which later turned violent. Merseyside Police have claimed that the English Defence League were responsible for organising these protests, though EDL supporters were certainly not the only individuals present.

This dispatch outlines the stark and rapid pipeline from online ‘misinformation clickbait’ to offline violence.

Misinformation spreads quickly after the stabbing attack

Soon after news of the stabbing came to light, anti-migrant and anti-Muslim narratives were seeded online. In a now-deleted post, one X user shared a screenshot of a LinkedIn post by a man claiming to be the parent of two children present at the attack, in which he alleged that the attacker was a “migrant” and advocated “clos[ing] the borders completely”. This X user appears to have been the first to falsely assert that: 1) the attacker’s name was “Ali al-Shakati”; 2) he was on the “MI6 watch list” [this cannot be correct, as MI5, not MI6, is the security agency responsible for domestic counter-terrorism]; 3) he was “known to Liverpool mental health services”; and 4) he was “an asylum seeker who came to UK [sic] by boat last year”.

These false claims were then uncritically amplified by other X accounts claiming to be “news outlets”. A small account called ‘Channel3 Now’, whose website primarily contains material related to violent incidents, published the name “Ali al-Shakati” in an article that has since been deleted. An ISD OSINT investigation suggests that a previous iteration of Channel3 Now’s website was run from an address in Pakistan, while other reporting has suggested that those who run the website may be based in Pakistan and/or the United States. Channel3 Now’s ‘reporting’ was then cited by a range of accounts, including ‘End Wokeness’, which has 2.8 million followers.

The police did not confirm that the name was false until midday the following day. By 3pm the day after the attack, the false name had received over 30,000 mentions on X alone from over 18,000 unique accounts. As the alleged perpetrator is 17 and a minor under UK law, their name cannot legally be published until after legal proceedings have concluded.

Figure 1: An article on Channel3 Now was among the first to promote the false information about the perpetrator’s identity.

Algorithms amplify false information

The false name attributed to the attacker was circulated organically, but also recommended to users by platform algorithms.

On X, the false name of the attacker, “Ali al-Shakati”, featured as a ‘Trending in the UK’ topic and was suggested to users in the “What’s happening” sidebar. When searching for “Southport” on X, the top recommended results included the account of actor-turned-political activist Laurence Fox. Earlier, Fox, who has more than half a million followers on X, had cited the false identification of the attacker in a call “to permanently remove Islam from Britain. Completely and entirely”. His post has received more than 850,000 views on the platform.

Figures 2 and 3: Recommendation algorithms surfacing Laurence Fox’s account on X when searching for the term ‘Southport’, and his post calling to “permanently remove Islam from Great Britain. Completely and entirely” in response to the attack.

Meanwhile on TikTok, search results for “Southport” recommended “Ali al-Shakati arrested in Southport” as a suggested query that “Others searched for”. Through these recommender algorithms, platforms therefore amplified misinformation to users who may not otherwise have been exposed, even after the police had confirmed the name was false.

Figure 4: An algorithmic recommendation for the false name of the perpetrator when searching for “Southport” on TikTok. This screenshot was taken approximately nine hours after the police had confirmed that the name was false.

Figure 5: The false name of the perpetrator was trending on X and recommended to users in the “What’s happening” sidebar. This screenshot was taken before the police had confirmed that the name was false.

Anti-Muslim and anti-migrant users weaponise false information

The supposed perpetrator – as described across social media – was alleged to be Muslim and to have arrived in the UK in 2023 on a small boat. This false narrative echoes Islamophobic tropes that Muslims and migrants are disproportionately violent and associated with criminality. On X, the four most widely shared posts containing the fake name were from accounts which frequently promote anti-Muslim and anti-migrant narratives; these posts specifically mentioned his alleged religious beliefs. One post, viewed more than a million times at the point of analysis, came from an account whose handle alleges that Europe is being ‘invaded’ and which regularly posts content portraying immigrants in a negative light and focusing on the ethnicity of those arriving in the UK. Such content exemplifies the direct link between this disinformation narrative and the spread of anti-Muslim and/or anti-migrant conspiracies.

Figure 6: One post by an anti-Muslim account spreading the false name, Ali al-Shakati, which has been viewed more than 1.4 million times as of 31 July.

It was not long until calls for mass deportation of migrants and Muslims gained traction.

Online to offline: Far-right networks use social media to mobilise a protest

Far-right online networks began to organise multiple protests the day after the attack, by which point false information and anti-Muslim narratives were already widespread. Their activity took place alongside a wider, peaceful vigil organised by the community of Southport. The far-right protest was due to take place on St Luke’s Road, next to the scene of the attack and home to the Southport Mosque.

One TikTok account created specifically for the protest posted multiple videos calling for support, which received tens of thousands of views on 30 July. The videos included symbols used by far-right movements and called for “mass deportation”. Despite directly inciting hatred, these videos remained online and were not removed by the platform, even after violence erupted at the physical rally.

Figure 7: A TikTok account set up to mobilise anti-migrant protests in response to the Southport stabbing.

Screenshots from these TikTok videos were shared across social media platforms, including in dedicated Telegram chats, where they were reposted into other channels.

Figure 8: Screenshot of a TikTok video calling for the protest and depicting a man wearing a far-right symbol, shared to Telegram.

The protest was also promoted on X. Although the imagery used was less explicitly violent, accounts directly linked the protest to mis- and disinformation about the attacker’s identity. For example, one user with more than 16,000 followers and X Premium status shared a protest poster, claiming that “children are being slaughtered at the alter [sic] of uncontrolled mass migration” and that “Open Borders advocates have blood on their hands”.

Figure 9: A graphic posted to X promoting the protest and referring to the attack using anti-migrant sentiment.

Protestors on the ground attacked the local mosque and clashed with police units, chanting “Allah, Allah, who the f*** is Allah” and “we want our country back”. A livestream of the riot was posted to YouTube, with commenters posting their support in real time. Sixteen hours after it was posted, the stream had nearly 200,000 views.

Figure 10: A screenshot of a YouTube livestream of the riot.

Platform responses

Platforms have developed crisis response protocols for terrorist and mass-casualty events. However, they continue to struggle with the broader challenge of viral misinformation and how rapidly it can translate into hate or calls for violence. Where relevant policies do exist, this scenario demonstrates either a lack of effective enforcement or a failure to understand the real-world impacts of misinformation.

For example, X’s Hateful Conduct policy prohibits “inciting behaviour that targets individuals or groups of people belonging to protected categories” with explicit reference to “inciting fear or spreading fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities.” Much of the content analysed by ISD suggested that Muslims or migrants are particularly prone to criminality, and therefore appears to be in contravention of this policy.

While TikTok’s Community Guidelines prohibit direct incitement of violence, this event shows how users can weaponise borderline content to drive real societal harms. Moreover, the mis- and disinformation surrounding the supposed perpetrator arguably violates TikTok’s Terms of Service, specifically its Integrity and Authenticity Policies. Once police had confirmed that the name was false, the circulating content could decisively be labelled as mis- or disinformation. TikTok clearly states that unverified information in emergencies should not go unchecked on the platform. It should therefore have used fact-checking and other content moderation tools to enforce its Terms of Service more accurately and mitigate the harms generated by the spread of mis- and disinformation.

Conclusion

The violence witnessed in Southport exemplifies the real-world consequences of viral, unchecked misinformation on social media.

The information vacuum in the immediate aftermath of the attack allowed cynical actors to seize on the tragedy and spread hateful narratives. Narratives which sought to promote the targeting of Muslims and migrants quickly took root, with speculation about the ethnicity and religion of the attacker immediately spreading online. Self-described ‘breaking news’ accounts and content aggregators, some of which are monetised on X, are incentivised to spread sensationalist details to garner engagement, regardless of veracity. Most concerningly, there appear to be no consequences if these ‘facts’ are later proven false.

Once the false name and details were posted, they became a rallying point for anti-migrant and anti-Muslim sentiment. Posts quoting the disinformation were algorithmically amplified, spreading it to a wider audience. An ecosystem of far-right and radical-right[i] accounts, whose core focus is anti-Muslim and anti-migrant hatred, was instrumental in pushing the disinformation further.

The British far right demonstrated its capacity to mobilise at short notice in response to the incident, even in the absence of any verified information about the identity or motives of the attacker. What was supposed to be a vigil for victims was hijacked by violence, with more than 50 police officers reported injured. Far-right accounts blamed the government and other institutions for the violence, claiming their attacks were a legitimate response to perceived uncontrolled migration.

The events in Southport also present problems for platforms where users have circulated the (false) name of an alleged criminal who is a minor. Under-18s accused of crimes in the UK are not allowed to be named, whether by police or press, until they reach the age of 18 or criminal proceedings are concluded. However, there is more ambiguity as to whether members of the public can reveal the name of an accused minor, or whether platforms have an obligation to prevent names being disclosed. In this case, the name that was circulated on social media was false; this was only confirmed by police after it had become a mobilising cause for anti-migrant and anti-Muslim activism.

As a case study, this event suggests a gap in platforms’ detection of, and response to, rapidly emerging harms. Platforms should actively moderate content revealing the name of an accused minor, regardless of whether that name is correct. They should also surface factual information to users explaining why names are not disclosed under UK law. Existing platform policies may cover situations where an incorrect or fake name constitutes misinformation that could incite hatred or violence. However, they do not account for situations in which the name is correct but should not have been released under reporting restrictions.

More broadly, this incident clearly demonstrates how viral disinformation can fuel violence, harassment and hate in the wake of a tragic event. Those spreading misinformation may well have benefited from a lack of moderation, garnering millions of clicks and views on spurious content which served to inflame community tensions in the UK.

Endnotes

[i] There is no agreed definition of what constitutes the ‘extreme right’. A widely accepted definitional minimum of ‘far right’ created by Cas Mudde identifies five core elements common to the majority of definitions: strong-state values, nationalism, xenophobia, racism and anti-democracy. Within the broad framework of ‘far right’, Mudde identifies two branches, the radical right and the extreme right, which are differentiated by attitudes towards strategic political violence (typically supported by the extreme right) and democracy (while the extreme right rejects all forms of democracy, the radical right opposes liberal democracy while working within democratic frameworks).
