From rumours to riots: How online misinformation fuelled violence in the aftermath of the Southport attack
31 July 2024
In the aftermath of a mass stabbing in Southport, UK, misinformation about the identity of the attacker, who is a minor and therefore cannot be named until criminal prosecution is complete, spread quickly across social media. The following evening, a community-organised vigil for the victims was hijacked by far-right rioting, mobilised through anti-Muslim and anti-migrant narratives with no factual basis. This dispatch explores the events that unfolded, the spread of hateful speech on- and offline, and the part played by platforms’ business models, algorithms and content moderation.
On Monday, 29 July, a stabbing attack at a Taylor Swift-themed children’s dance party in Southport, Merseyside, killed three young girls and injured several more (figures accurate at the time of writing). The tragedy, one of the UK’s most significant mass-casualty events in recent history, is alleged to have been perpetrated by a local 17-year-old boy, whose motive is as yet unknown.
Self-described ‘news’ accounts rapidly spread falsehoods about the perpetrator. One viral narrative falsely named him as “Ali al-Shakati”, a Muslim migrant newly arrived in the UK; this was later debunked by the police. Nonetheless, false claims surrounding the attack quickly garnered millions of views online, amplified by anti-Muslim and anti-migrant activists and promoted by platforms’ recommender systems. Far-right networks – a mix of formal groups and a broader ecosystem of individual actors – used this spike in activity to mobilise online, organising anti-Muslim protests outside the local mosque which later turned violent. Merseyside Police have claimed that the English Defence League were responsible for organising these protests, though EDL supporters were certainly not the only individuals present.
This Dispatch outlines the stark and rapid pipeline from online ‘misinformation clickbait’ to offline violence.
Misinformation spreads quickly after the stabbing attack
Soon after news of the stabbing came to light, anti-migrant and anti-Muslim narratives were seeded online. In a now-deleted post, one X user shared a screenshot of a LinkedIn post from a man who claimed to be the parent of two children present at the attack, in which he alleged that the attacker was a “migrant” and advocated for “clos[ing] the borders completely”. This X user appears to have been the first to falsely assert that: 1) the attacker’s name was “Ali al-Shakati”; 2) he was on the “MI6 watch list” [this cannot be correct, as MI5 is the security agency responsible for domestic terrorism]; 3) he was “known to Liverpool mental health services”; and 4) he was “an asylum seeker who came to UK [sic] by boat last year”.
These false claims were then uncritically amplified by other X accounts claiming to be “news outlets”. A small account called ‘Channel3 Now’, whose website primarily contains material related to violent incidents, included the name “Ali al-Shakati” in an article that has since been deleted. An ISD OSINT investigation suggests that a previous iteration of Channel3 Now’s website was run from an address in Pakistan. Other reporting has suggested those who run the website may be based in Pakistan and/or the United States. Channel3 Now’s ‘reporting’ was then cited by a range of accounts including ‘End Wokeness’, which has 2.8 million followers.
The police did not confirm that the name was false until midday the following day. By 3pm the day after the attack, the false name had received over 30,000 mentions on X alone from over 18,000 unique accounts. As the alleged perpetrator is 17 and a minor under UK law, their name cannot legally be published until after legal proceedings have concluded.
Algorithms amplify false information
The false name attributed to the attacker was circulated organically, but also recommended to users by platform algorithms.
On X, the false name of the attacker, “Ali al-Shakati”, featured as a ‘Trending in the UK’ topic and was suggested to users in the “What’s happening” sidebar. When searching for “Southport” on X, the top recommended results included the account of actor-turned-political activist Laurence Fox. Fox, who has more than half a million followers on X, had earlier used the false identification of the attacker as the basis for a call “to permanently remove Islam from Britain. Completely and entirely”. His post has received more than 850k views on the platform.
Meanwhile on TikTok, search results for “Southport” recommended “Ali al-Shakati arrested in Southport” as a suggested query that “Others searched for”. Through these recommender algorithms, platforms amplified misinformation to users who might not otherwise have been exposed to it, even after the police had confirmed the name was false.
Anti-Muslim and anti-migrant users weaponise false information
The supposed perpetrator – as described across social media – was alleged to be Muslim and to have arrived in the UK in 2023 on a small boat. This false narrative echoes Islamophobic tropes that Muslims and migrants are disproportionately violent and associated with criminality. On X, the four most widely shared posts containing the fake name were from accounts which frequently promote anti-Muslim and anti-migrant narratives; these posts specifically mentioned his alleged religious beliefs. One post, viewed more than a million times at the point of analysis, came from an account whose handle alleges that Europe is being ‘invaded’; the account regularly posts content that portrays immigrants in a negative light and focuses on the ethnicity of those arriving in the UK. Such content exemplifies the direct link between this disinformation narrative and the spread of anti-Muslim and/or anti-migrant conspiracies.
It was not long before calls for the mass deportation of migrants and Muslims gained traction.
Online to offline: Far-right networks use social media to mobilise a protest
Far-right online networks began to organise multiple protests the day after the attack, by which point false information and anti-Muslim narratives were already widespread. Their activity took place alongside a wider, peaceful vigil organised by the community of Southport. The far-right protest was due to take place on St Luke’s Road, next to the scene of the attack and home to the Southport Mosque.
One TikTok account created specifically for the protest posted multiple videos calling for support, which received tens of thousands of views on 30 July. The videos included symbols used by far-right movements and called for “mass deportation”. Despite directly inciting hatred, these videos remained online and were not removed by the platform, even after violence erupted at the physical rally.
Screenshots from these TikTok videos were shared across social media platforms, including in dedicated Telegram chats, where they were reposted into other channels.
The protest was also promoted on X. Although the imagery used was less explicitly violent, accounts directly linked the protest to mis- and disinformation about the attacker’s identity. For example, one user with more than 16,000 followers and X Premium status shared a protest poster, claiming that “children are being slaughtered at the alter [sic] of uncontrolled mass migration” and that “Open Borders advocates have blood on their hands”.
Protestors on the ground attacked the local mosque and clashed with police units, chanting “Allah, Allah, who the f*** is Allah” and “we want our country back”. A livestream of the riot was posted to YouTube, with commenters posting their support in real time. Sixteen hours after it was posted, the stream had nearly 200,000 views.
Platform responses
Platforms have developed crisis protocols for responding to terrorist and mass-casualty events. However, they continue to struggle with the broader challenge of viral misinformation and how rapidly it can translate into hate or calls for violence. Where relevant policies do exist, this scenario demonstrates a lack of effective enforcement, or of understanding of the real-world impacts of misinformation.
For example, X’s Hateful Conduct policy prohibits “inciting behaviour that targets individuals or groups of people belonging to protected categories” with explicit reference to “inciting fear or spreading fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities.” Much of the content analysed by ISD suggested that Muslims or migrants are particularly prone to criminality, and therefore appears to be in contravention of this policy.
While TikTok’s Community Guidelines prohibit direct incitement of violence, this event shows how users can weaponise borderline content to drive real societal harms. However, the mis- and disinformation surrounding the supposed perpetrator arguably violates TikTok’s Terms of Service, specifically its Integrity and Authenticity Policies. Once the police had stated that the name was false, the circulating content could decisively be labelled mis- or disinformation. TikTok clearly states that unverified information in emergencies should not go unchecked on the platform; it should therefore have used fact-checking and other content moderation tools to enforce its Terms of Service more accurately and to mitigate the harms generated by the spread of mis- and disinformation.
Conclusion
The violence witnessed in Southport exemplifies the real-world consequences of viral, unchecked misinformation on social media.
The information vacuum in the immediate aftermath of the attack allowed cynical actors to seize on the tragedy and spread hateful narratives. Narratives which sought to promote the targeting of Muslims and migrants quickly took root, with speculation about the ethnicity and religion of the attacker immediately spreading online. Self-described ‘breaking news’ accounts and content aggregators, some of which are monetised on X, are incentivised to spread sensationalist details to garner engagement, regardless of veracity. Most concerningly, there appear to be no consequences if these ‘facts’ are later proven false.
Once the false name and details were posted, they became a rallying point for anti-migrant and anti-Muslim sentiment. Posts quoting disinformation were algorithmically amplified, spreading them to a wider audience. An ecosystem of far-right and radical-right[i] accounts, whose core focus is anti-Muslim and anti-migrant hatred, was instrumental in pushing the disinformation further.
The British far right demonstrated its capacity to mobilise at short notice in response to the incident, even in the absence of any verified information about the identity or motives of the attacker. What was supposed to be a vigil for victims was hijacked by violence, with more than 50 police officers reported injured. Far-right accounts blamed the government and other institutions for the violence, claiming their attacks were a legitimate response to perceived uncontrolled migration.
The events in Southport also present problems for platforms where users have circulated the (false) name of an alleged criminal who is a minor. Under-18s accused of crimes in the UK are not allowed to be named, whether by police or press, until they reach the age of 18 or criminal proceedings are concluded. However, there is more ambiguity as to whether members of the public can reveal the name of an accused minor, or whether platforms have an obligation to prevent names being disclosed. In this case, the name that was circulated on social media was false; this was only confirmed by police after it had become a mobilising cause for anti-migrant and anti-Muslim activism.
As a case study, this event suggests a gap in platforms’ detection of, and response to, rapidly emerging harms. Platforms should actively moderate content revealing the name of an accused minor, regardless of whether that name is incorrect. They should also surface factual information to users about why names are not disclosed under UK law. Existing platform policies may cover situations where an incorrect or fake name constitutes misinformation that could incite hatred or violence. However, this does not account for situations in which the name is correct but should not have been released under reporting restrictions.
More broadly, this incident clearly demonstrates how viral disinformation can fuel violence, harassment and hate in the wake of a tragic event. Those spreading misinformation may well have benefited from a lack of moderation, garnering millions of clicks and views on spurious content which served to inflame community tensions in the UK.
Endnotes
[i] There is no agreed definition of what constitutes the ‘extreme right’. A widely accepted definitional minimum of ‘far right’ created by Cas Mudde identifies five core elements common to the majority of definitions: strong-state values, nationalism, xenophobia, racism and anti-democracy. Within the broad framework of ‘far right’, Mudde identifies two branches, the radical right and the extreme right, which are differentiated by attitudes towards strategic political violence (typically supported by the extreme right) and democracy (while the extreme right rejects all forms of democracy, the radical right opposes liberal democracy while working within democratic frameworks).