14 November 2022
On 30 October, a migrant processing centre in Dover was allegedly firebombed by a 66-year-old suspect who was later found dead at a nearby petrol station, having apparently taken his own life. Analysis of the alleged attacker’s social media activity across platforms paints a picture of an individual embroiled in a hateful and conspiratorial online world, with the police describing the attack as “targeted and likely driven by some form of hate-filled grievance”. But while the investigation was taken over by counter-terrorism police days after the attack, the bombing has not yet been designated an act of terrorism (defined under UK law as violence for the purpose of advancing a political, racial or ideological cause).
In the analysis below, we lay out several key trends emerging from the attack, as well as implications for how counter-terrorism policy must adapt to prevent these kinds of violent acts in the future.
1. The attacker was immersed in a hateful and conspiratorial online ecosystem, rather than associated with organised extremist movements
Social media analysis reveals that the alleged attacker had been immersed in an ecosystem of extreme right-wing, conspiratorial and hateful content since at least 2014. The content he was interacting with likely both influenced and mirrored his own increasingly violent rhetoric.
The alleged attacker’s cached social media feeds reveal posts expressing a broad spectrum of hate, including antisemitic, anti-Muslim, anti-migrant, anti-gay and anti-trans sentiment. He expressed resentment towards migrants, in particular Muslim migrants whom he associated with ‘grooming’ gangs, saying “people smugglers are bringing paedophiles into the country” and that women and children “will never be safe.” Antisemitic references include a retweet of a video by a ‘Rabbi Yaron Reuven’, who held Jews responsible for ‘degeneracy’ in the German Weimar Republic in the 1930s. He also expressed hate towards members of the LGBTQ+ community, reposting anti-trans memes and accusing gay people of abusing children.
Conspiracy theories were seemingly central to his online universe. He repeatedly referenced a supposed ‘New World Order’, the hidden agenda of mainstream media, the power of Klaus Schwab and the World Economic Forum, and narratives linking COVID-19 vaccines to health issues and authoritarianism.
Notably, there has so far been little evidence of association with formal violent extremist groups, although he did share content on Facebook from entities like Turning Point UK, the white nationalist Traditional Britain Group and far-right ideologue Stephen Yaxley Lennon, aka Tommy Robinson. This trend speaks to the increasingly post-organisational dynamics present within contemporary right-wing extremism, where mobilisation is more closely tied to engagement with online extremist subcultures than membership of specific violent groups.
It is crucial that the UK’s revised counter-terrorism (CONTEST) strategy engages with this kind of extremist threat, associated with an increasingly fragmented threat landscape, where individuals are gradually socialised to acts of grievance-fuelled violence rather than explicitly ‘targeted for radicalisation’. Although this appears to be a clear case of ideologically motivated violence, current definitions of terrorism are seemingly failing to capture this growing category of threat.
2. Mainstream social media were the core platforms for the attacker’s hateful diatribes
Despite growing interest in extremist use of alternative platforms like Gettr, Truth Social and Gab, this episode shows the continued importance of mainstream social media platforms like Facebook and Twitter in the trajectory of radicalisation, especially among extremists of older generations.
While Twitter and Facebook appear to have eventually taken action at an indeterminate point, Leak’s cached social media posts indicate prolific posting of content contravening the companies’ terms of service. According to The Times, the attacker’s social media accounts had been suspended on multiple occasions, although ISD found no evidence that his accounts were taken down ahead of the attack, despite increasingly concerning patterns of posting, including direct threats and calls to action. While not all of his earlier Twitter posts contained hateful ideas, the attacker started posting anti-migrant and anti-Muslim content as early as 2014. It appears that from 2019 onwards, posts on Twitter and videos uploaded to YouTube started to feature more serious indications of conspiratorial thinking and violent ideation.
His most recent Twitter account had been active since May 2022 and was recently suspended. He characterised himself as a “defender of free speech”, “protect[er] of women and children” and “anti-child grooming”. He appeared to be highly active on the platform, posting or retweeting numerous times per day. He used the platform to express his hate towards (Muslim) migrants, members of the LGBTQ+ community, mainstream media and government officials. Several tweets contained outright hate, such as a claim that “gay men abuse young boys” or a call to “get the spitfires out” in reply to a video of migrants at the Turkish border. Some of his tweets contained calls to action, for example, to “get ready” because “you will not survive what is coming” or that “we are in a fight for our lives… Stand up or be walked all over”. Such rhetoric could be interpreted as indicative of his willingness to act. Roughly an hour before Leak is alleged to have conducted his attack, he tweeted: “we will obliterate them Muslim children are now our target. And there disgusting women will be targeted mothers and sisters Is burn alive” [sic].
Such content shows the limitations of narrow conceptions of content moderation policies focused on ‘dangerous organisations’, which fail to recognise the role of the broader swathe of harmful content outlined above in moving individuals towards violence. It also demonstrates the hybridised nature of such harmful online content, which cannot necessarily be neatly classified as ‘terroristic’, but rather establishes a mood music of dehumanisation which can so easily lay the ideological groundwork for violent attacks like that in Dover.
3. The attacker is part of an older demographic, which challenges the growing focus on ‘youth radicalisation’
There has been considerable concern around the vulnerability of young people to online radicalisation, with a cliched picture emerging of the ‘angry young isolated man hooked on YouTube.’ However, the demographic of this attacker shows that the isolated older man on Facebook may well be a category of vulnerability to radicalisation requiring greater analysis. It also shows the need to nuance the predominant usage of ‘profiles’ in the counter-terrorism system, and instead start recognising behaviours that are likely to be exhibited by someone being socialised to violence.
This demographic has a notably different aesthetic from the ‘chan’-based radicalisation paradigm of younger Gen-Z audiences, who are defined by the consumption and production of highly stylised accelerationist content. Looking at the images and videos shared by the alleged Dover attacker on social media, we see a distinct meme culture rooted in the nostalgic romanticisation of a perceived glorious British past. The potential influence of such online aesthetics, particularly when presented in juxtaposition to an alleged ‘degeneracy’ of ‘British culture’, should be further examined in understanding the journey of some older people towards various kinds of hatred and conspiratorial belief and, in some cases, violence.
The social care and ‘safeguarding’ impetus of CVE policy has increasingly shifted to consider youth as primarily, or uniquely, susceptible to radicalisation. But a large proportion of terrorists and people responsible for mass violence of varying kinds fall outside this demographic and have been older, such as Thomas Mair (53), Vincent Fuller (50), Derrick Bird (52), Thomas Hamilton (43), Darren Osborne (48), and Stephen Paddock (64). These older perpetrators typically have a long process of radicalisation; they normally have difficulties in their personal lives (just like younger offenders), and are unlikely to have many close contacts. While online evidence suggests a longer-term radicalisation trajectory in the case of the Dover attacker, with indications that his final state of mind may have been shaped by the death of his son and his own incurable cancer, the COVID pandemic has also had a considerable impact over the last two years in exacerbating loneliness and isolation in general, as well as increasing time spent online.
4. Anti-migrant hate is becoming increasingly core to the online far-right extremist playbook
Central to the attack’s targeting, and a core object of online hate in the run-up to the attack within the accounts analysed, was an obsession with migrants as an outgroup deserving of demonisation. Forthcoming ISD digital analysis of online extremism in the UK has found that migration-focused discussion (around terms like ‘dover’, ‘border’, ‘migrants’, etc.) constituted the largest extremist cluster or ‘tribe’ – a grouping highly linked to conspiracy communities, white nationalists and traditional far-right networks.
Such rhetoric sits in a wider political context. The day after the attack on the immigration processing centre, UK Home Secretary Suella Braverman described the arrival of asylum seekers on British shores as an “invasion”. She also asserted that “many of them” are either facilitated by criminal gangs or are members of gangs themselves. She explicitly warned against “pretending” that the asylum seekers were all “refugees in distress.” Braverman is a staunch advocate of the plan to send asylum seekers to Rwanda, stating that it is her “dream” to see such a flight taking off. Both Braverman and her predecessor, Priti Patel, have spoken out against efforts by organisations and lawyers to stop the Rwanda-bound flights. Patel’s rhetoric – which included phrases such as “lefty lawyers” and “activist lawyers” – was cited by a would-be knife assailant as he threatened to kill an immigration solicitor in September 2020.
Sadly, such violent targeting of migrants has been commonplace for years in Europe. During the peak of the so-called refugee crisis in Germany, attacks targeting refugee centres spiked. According to figures from the Federal Criminal Police Office (BKA), there were 1,031 crimes against refugee centres in 2015 and 988 in 2016. This included 94 arson attacks against refugee centres in 2015 and 66 in 2016 (compared to 6 in 2014 and 17 in 2017) – the violent tip of a growing Europe-wide disinformation ecosystem targeting migrants and refugees.
The recently announced refresh of CONTEST, the UK’s counter-terrorism strategy, combined with ongoing discussion over the future direction of Prevent, puts efforts to counter mass violence at a crossroads.
The Dover attacker’s social media history illuminates multiple issues with our current approach. The increasingly singular focus on ‘vulnerable’ younger terrorists has created a blind spot for older perpetrators and the radicalisation of an older generation of people who are, statistically, more likely to be involved in acts of terrorism, often driven by hatred towards various marginalised groups rather than a coherent ideology.
While the police must of course ascertain the relevant details before investigating a case as terrorism, the delays in handing this case to counter-terror police, and the lack of any announcement as to whether it is being treated as terrorism, provide deeply unhelpful optics. They signal to the public that the case is not being treated as seriously as other attacks, giving the impression that hatred of migrants is considered a lesser sin, or more legitimate, than hatred of other groups or violence driven by a neatly categorised ideology. Given the growing far-right threat in the UK, it might be prudent for attacks of this nature against migrants to be considered for links to terrorism from the outset.
Officials have expressed concern over the rapid increase in referrals to Prevent that come under ‘Mixed, Unclear, or Unstable’ (MUU) ideologies, a miscellaneous category that is now the single largest category of referrals. There is a clear need to examine these cases in detail, to understand what is driving these figures, and to ensure that a category as potentially broad as MUU is focused on actual risk rather than serving as a catch-all for society’s most fractured individuals. In addition, there should be a review of whether the current approach to categorising extremist ideologies is the most effective way for government, law enforcement and civil society to understand and act on grievance-fuelled violence and terrorism.
The case also highlights issues for social media platforms, whose content moderation policies must reckon with the deluge of hateful content posted by some users without significant repercussions. This is true not only of the ‘alternative’ platforms that have become an increasing focus of attention, but also of mainstream platforms like Facebook and Twitter, where this alleged attacker spent years producing and amplifying content that, when viewed in its entirety, should have been considered a significant red flag.