After Southport: Policy responses to far-right extremism

15 August 2024

After the tragic murder of three girls in Southport, the UK has seen its largest far-right extremist mobilisations in recent years. This article considers some of the longer-term policy implications emerging from the riots. It outlines the need to: update responses to the evolving far-right extremist ecosystem; renew the cross-governmental counter-extremism strategy; and strengthen regulatory responses to social media platforms.


Key recommendations

Updating responses to the evolving far-right violent extremist ecosystem

  • Policy responses must reflect the amorphous and decentralised nature of the contemporary far right, which has evolved beyond group-centric models.
  • Responses must address the diverse drivers of extremism beyond ideology, including social isolation, a search for community and a more generalised support for violence.
  • Strategies must respond to the increasing hybridisation of violent extremism, targeted hate and online misinformation.

Investing in a long-term counter-extremism strategy

  • A renewed counter-extremism effort is required to address the drivers of violence, encompassing digital literacy, social cohesion and resilience-building efforts.
  • Improved cross-government coordination, with leadership from No. 10, should bridge all departments whose work is relevant to the diverse drivers and impacts of extremism, including Health, Education, Security and Communities.
  • This structure should be mirrored at the local level, including through meaningful engagement with communities.

Ensuring robust regulatory responses

  • We commend the Government’s commitment to review the effectiveness of the UK’s Online Safety Act once it has been fully implemented, rather than disrupt the progress being made by Ofcom to ensure it can be enforced from 2025 onwards. However, we believe there are some key gaps in the current legislation, for example around researcher data access and crisis response, that could be addressed in the short term without significant disruption to the implementation timeline.
  • The cross-platform ecosystem underpinning far-right extremist violence highlights the need for effective regulation of not only large platforms, but also high-risk smaller and medium-sized services (e.g. Telegram). Ofcom must use the full range of powers available under the OSA to effectively regulate these types of platforms.
  • Legally mandated platform data access for researchers is urgently required to enable the identification of online harms and independent scrutiny of platforms’ responses, and will be essential for understanding the effectiveness and impact of the UK’s OSA in any future review.

Updating responses to the evolving far-right violent extremist ecosystem

The violent August 2024 riots in the UK were organised across an interconnecting web of far-right online networks with no central leader or group structure, emblematic of the grassroots and decentralised nature of the contemporary far right. In recent years, policy responses originally designed to counter the top-down threat from Islamist terrorist groups such as al-Qaeda, and later the Islamic State, have struggled to keep pace with the evolving threat landscape. The recent designation of the Terrorgram Collective, a network of violent far-right Telegram channels, shows intent to address this trend. However, rather than bending existing rigid structures to fit new threats, a new framework for responding to this more amorphous extremist violence is needed.

For example, rioting was originally attributed to the English Defence League (EDL), a group which has not formally existed for over a decade but whose activists continue to agitate across loosely-organised online networks. Calls to designate the EDL as an organised terrorist group fail to reflect the fluid nature of this movement and are likely to have limited impact. Instead, approaches to such ‘post-organisational’ threats would likely benefit from focusing more on disruption than designation.

Diversification of ideological drivers 

The decentralisation of extremist networks also speaks to their ideological diversity. Rioters were not a monolithic mass but ranged from ardent white nationalists to anti-Muslim activists to football hooligans, underpinned by the mainstreaming of anti-Muslim and anti-migrant prejudice. The movement, its diverse motivations and its interaction with the mainstream must be systematically mapped as part of any comprehensive strategy, to ensure more fine-grained and targeted interventions.

The increasing intermixing of different extremist ideas across online subcultures, coupled with the diversification of motivators, often stoked by hostile state influence operations, can be understood in the context of the ‘hybridisation’ of extremism. The emergent threat landscape is more amorphous and interconnected than ever before, with a wider range of malign actors, such as non-ideological school shooter fandoms or hostile states, increasingly interwoven with and amplifying violent extremists online. Strategies must move beyond the traditional counter-terrorism threat paradigm to adequately respond to these new realities, rather than going ‘back to basics’ to focus on legacy challenges.

Evolving pathways to violence 

The perpetrators of recent extremist violence range widely in age. Several older individuals have been arrested, mirroring the ages of those behind recent far-right attacks on a migrant centre in Dover and a synagogue in Exeter, and the murder of Jo Cox MP. Meanwhile, children as young as 12 have been charged over their involvement in the riots. This mirrors the decreasing age of Prevent referrals and terrorism offenders over the past decade. Rather than merely transplanting adult-centric counter-terrorism policies onto children, their additional vulnerabilities demand the reinforcement of safeguarding approaches and recognition of their dual victim-perpetrator roles.

Engagement pathways, particularly for young people, are not always driven by ideology, but draw from a mix of vulnerabilities including grievances, social isolation, identity-formation, thrill-seeking behaviours, poor mental health and in some cases neurodiversity. Such vulnerabilities will need addressing from a safeguarding-centred perspective. Furthermore, the overwhelmingly male makeup of the rioters also suggests a need for greater scrutiny of the role of masculinity in radicalisation pathways, as well as improved understanding of the intersection of extremism and violence against women and girls.

Prompted by these shifting engagement pathways and vulnerabilities, referrals to Prevent have in recent years mainly comprised cases deemed unrelated to counter-terrorism. Rather than being folded into the counter-terrorism apparatus or dropped entirely, such cases need to be off-ramped into other statutory provisions, such as mental health or social care.

While the final section of this article considers the emerging regulation of social media platforms, the recent violence has also raised questions around the application of current laws in the online space. Ongoing prosecutions will test the effectiveness of existing legal instruments – such as hate speech, incitement and terrorism offences – in the online domain. The application of these laws should be thoroughly audited to understand potential enforcement gaps which might be addressed, and the additional resources that may be required to ensure their effective and consistent application to online threats on an ongoing basis.

Investing in a long-term counter-extremism strategy

The proliferation of targeted hate against Muslim communities and migrants over the past two weeks exists against a broader backdrop of threats to democracy and pluralism, including harassment of public figures, hostile state information campaigns and the proliferation of harmful conspiracy theories. This demonstrates the urgent need for a renewed strategic counter-extremism effort across government to systematically address the drivers, and not just the violent symptoms, of extremism today.

Currently, a clear line is drawn between extremism that is violent, warranting a Home Office counter-terrorism response, and extremism that is not, which sits with the Ministry for Housing, Communities and Local Government (MHCLG). This line, however, is increasingly blurry, with recent events clearly demonstrating the close interplay between the two.

There is an urgent need for greater coordination of cross-government responses, to bridge the diverse agencies and departments with portfolios relevant to addressing the root causes and impacts of extremism. Strong No. 10 leadership will likely be required to help align strategic approaches, including across the Home Office and MHCLG, where most of this extremism portfolio currently sits. However, a broader cross-Whitehall effort should also include the Department for Education, the Department of Health and Social Care to enshrine a public health approach to prevention, and the Department for Science, Innovation and Technology in its capacity driving social media regulation (discussed below).

National strategy, local delivery

The national strategy, guided by robust data collection and analysis, must translate into local delivery. Dedicated central government resourcing should drive more localised responses to extremism and viral mis- and disinformation, ensuring national priorities are adapted to local needs and contexts. Mirroring structures at the national level, Community Safety Partnerships should bring together the different elements required for coordinated counter-extremism approaches, including local authority Prevent, violence prevention teams, hate crime policing and community engagement. These teams should not just focus on responding to violence when it emerges, but work ‘upstream’ to recognise and respond to its drivers from an early stage.

While currently focused narrowly on countering terrorist violence, this existing infrastructure holds huge potential as a foundation for more streamlined local prevention strategies aimed at tackling the broader harms to communities presented by extremism. Shaped and driven by local government, such approaches should be rooted in meaningful coordination of local education committees, youth engagement services, children and family services, social workers, religious institutions and community policing teams. This engagement should include capacity building to help communities impacted by extremism develop social media monitoring, resilience-building campaigns and rapid response capabilities. Crucially, these partnerships can only achieve progress through a long-term programme of meaningful consultation and trust-building with communities.

The Education Secretary’s announcement of a refreshed digital literacy curriculum could make long-overdue progress towards building resilience to online misinformation. At its core, this programme should embed values of citizenship, democratic participation and human rights. Given the age range of far-right rioters, a complementary strategy for adults must also be considered, leveraging influential cultural institutions and sports clubs alongside employers, all of whom have untapped potential to reach key groups.

Ensuring robust regulatory responses

The instrumental role that social media played in driving offline violence after the Southport attack has generated renewed focus on the UK’s new social media regulation, the Online Safety Act (OSA). However, while the OSA was passed in October 2023, the multi-year timeline for its implementation has meant many of its provisions are yet to be finalised and enforced by the regulator Ofcom. As a result, it is still too early to effectively assess its efficacy or make further evidence-based changes to the OSA legislation. We therefore welcome the Government’s decision to wait until Ofcom have made more progress in the implementation and enforcement of the OSA before conducting a fuller review of its effectiveness.

Beyond the proliferation of illegal content observed in recent weeks, questions have been raised (for example by the Mayor of London Sadiq Khan) over whether the scope of the OSA sufficiently addresses the full range of potentially harmful online content and behaviours that may have played a role in exacerbating the riots. The Online Safety Act Network have produced an analysis of the relevant criminal offences already in place, and suggest that much of the viral mis- and disinformation spread in the aftermath of the Southport attacks – which helped lay the groundwork for far-right violence – would not meet the threshold to be dealt with via the illegal content duties of the OSA. They also identify other potential gaps, for example around the algorithmic amplification of harmful content, livestreaming, and the use of private messaging services to coordinate offline violence.

However, the OSA does allow Ofcom to assess the effectiveness of certain larger, higher-risk social media platforms’ (‘Category 1’ services) efforts to enforce their own Terms of Service or Community Guidelines under section 72(3). While the precise list of platforms that will be designated Category 1 is still to be confirmed, it is likely to include key platforms such as X, TikTok and Facebook, although smaller or medium-sized platforms such as Telegram may not meet the proposed thresholds.

Whilst Ofcom would not be able to require that certain categories of potentially ‘legal but harmful’ content, such as mis- or disinformation, are covered by these policies, the majority of larger platforms currently do have policies that address them. Platforms will also have to address these types of risks as a result of regulation in other jurisdictions such as the EU, where the Digital Services Act (DSA) requires platforms to assess and mitigate the impact of their services on systemic risks to fundamental rights, gender-based violence, civic discourse and a range of other areas, even if such content is not illegal. Social media companies also face other pressures, for example from advertisers, to address these types of online risks.

As a result, unless platforms decide to create divergent policies, and systems to enforce them, across the different jurisdictions in which they operate, we would expect most platforms to continue to cover these types of risks in their Terms of Service or Community Guidelines. Ofcom would then be able to ensure that these rules are applied consistently, including in moments of escalating crisis, and that platforms do not over-moderate by removing content that breaks neither the law nor their own rules.

Moving beyond enshrining the status quo

Despite this, it is vital that Ofcom’s approach to regulating online platforms does not entrench current industry practices that have been shown to be insufficient. Instead, Ofcom must take a more ambitious approach to enhancing online safety in the UK, as outlined in ISD and the OSA Network’s responses to Ofcom’s recent illegal harms consultation. This is particularly crucial against the backdrop of many larger social media companies systematically cutting the internal resources needed to enforce their own rules on hate, extremism, misinformation and disinformation, including significantly reducing their Trust and Safety teams, leading to deteriorating user experiences and risking non-compliance with regulation.

The interconnectivity of misinformation, hate and mobilisation across a range of platforms in the context of widespread rioting also demonstrates the urgency of effectively regulating not just the largest social media platforms, but high-risk smaller and medium-sized services too. These services, such as Telegram, have a track record of failing to moderate extremely harmful, violent and often illegal material, whether by design or through a lack of capacity to respond to the challenge. The takedown of several large Telegram channels which played a key role in mobilising far-right riots may have been pivotal in the short-term prevention of further unrest. However, such enforcement efforts are applied only rarely and sporadically. As outlined in our consultation response to Ofcom, it is vital that such high-risk platforms are also a key focus of Ofcom’s efforts to enforce the OSA, using the full range of powers available to them where platforms refuse to engage or cooperate fully.

Although we have recommended that the Government should not make fundamental changes to the OSA in the short term, there are some areas of the Act that could be immediately strengthened without significantly impacting Ofcom’s implementation timeline. For example, the EU’s DSA contains several provisions related to crisis response which would be highly relevant in situations like the Southport stabbings and are largely absent from the OSA. Although many platforms already participate in the Global Internet Forum to Counter Terrorism (GIFCT) and Christchurch Call incident or crisis response protocols, these would not necessarily cover the types of violence seen over recent weeks in the UK. Changes to the OSA could therefore require in-scope platforms to have provisions in place to respond to emerging crises, and allow Ofcom to ensure that these mechanisms are fit for purpose and triggered under appropriate circumstances.

The urgent need for data access

Finally, the decreasing transparency of online platforms remains a significant barrier to assessing risks from social media. Meaningful data access for researchers is mandated for large platforms under the DSA but not under the OSA, meaning that independent third parties, such as research organisations, are unable either to build a systematic picture of the threat landscape on social media, or to hold companies or the regulator to account on enforcement.

Since the OSA passed in 2023, the situation facing researchers has not stood still but actively deteriorated: in jurisdictions like the UK where regulation does not require it, platforms are increasingly withdrawing existing options for researchers to access social media data, avoiding unwanted scrutiny from civil society, academics and the media. For example, Meta’s CrowdTangle tool was shut down on 14 August and replaced by a system with reduced functionality and accessibility, while access to X data has become prohibitively expensive for the majority of researchers following Elon Musk’s takeover.

Without giving Ofcom powers to mandate such access for researchers, as the DSA does, UK social media users will be kept more in the dark than their EU counterparts about the scale and impact of online harms, the role platforms can play in exacerbating them, and any failures to mitigate them. A flourishing digital research sector would play a vital role in assisting Ofcom to identify emerging online risks and assess platforms’ compliance with their obligations under the OSA. If the Government plans to conduct a thorough review of the OSA in the coming years, a more comprehensive and sophisticated evidence base will be crucial to understanding whether the legislation has had the desired impact in making the UK safer, both online and offline.

Conclusions

The wave of misinformation, hate and extremism which sparked widespread far-right mobilisation in the UK in late July and early August 2024 has re-established the urgency of a comprehensive policy response. The riots did not appear in a vacuum, but were the result of years of normalisation of hate and mainstreaming of extremism. A new strategic framework, rooted in the promotion of universal rights and the safeguarding of democracy, is urgently needed. This framework must adopt a whole-of-society approach to tackle misinformation, hate and extremism, both on and offline.

ISD’s April 2024 paper ‘Beyond Definitions: The Need for a Comprehensive Human Rights-Based UK Extremism Policy Strategy’ provides a policy roadmap for responding to the interconnected threats from hate, extremism and hostile state actor activity facing the UK, and is available here.
