Use of words, phrases and hashtags associated with anti-Muslim mobilisation surges amid Israel-Gaza conflict

2 November 2023

By: Hannah Rose and Jacob Davey

Following Hamas’ terrorist attack on Israel on 7 October 2023, and the subsequent crisis in Israel and Gaza, ISD research found a 422% increase in language associated with anti-Muslim hate on the platform X, alongside major spikes in anti-Muslim keywords on alt-tech platforms such as Patriots.win, Bitchute, Gab and Odysee.


Beyond the tragic loss of life and the intensification of the conflict in Israel and Gaza, Hamas’ 7 October terrorist attack has had global repercussions. During conflicts like this, it is not unexpected to see increased manifestations of hate, in this case anti-Muslim hatred and antisemitism, both on- and offline. Muslim and Jewish communities have consistently been blamed for events in the Middle East, based on the stereotyping of whole communities and their perceived connections to foreign conflicts.

Between 7 and 29 October, anti-Muslim incident monitoring organisation Tell MAMA received 515 cases (268 online, 247 offline) in the UK, a six-fold increase from the same period the previous year. Of the 268 online cases, Tell MAMA found a majority contained dehumanising language and tropes, including content equating Muslim communities with violence and terrorism. The offline dangers of anti-Muslim hate were highlighted by the killing of a 6-year-old Palestinian-American boy in the United States on 14 October, reportedly connected to the suspect’s consumption of radio news about the terrorist attack and subsequent conflict. The suspect is now facing murder, attempted murder and hate crime charges. This trend has emerged against the backdrop of rising anti-Muslim hatred on social media and offline, including a more than doubling of anti-Muslim incidents in the UK in the last decade, and a rise in discrimination against Muslim communities in the US.

This Digital Dispatch seeks to provide an initial assessment of the scale and nature of anti-Muslim hate on mainstream and alt-tech platforms following the recent events in Israel and Gaza. These insights are presented in three sections: the first exploring the prevalence of keywords associated with anti-Muslim hate speech on X; the second exploring hashtags associated with campaigns promoting negative anti-Muslim stereotypes; and the third providing insight into the prevalence of anti-Muslim hate on alt-tech platforms.

Use of keywords associated with anti-Muslim hate speech rose 422% on X after Hamas’ terrorist attack

This section investigates mentions of a list of 86 keywords and phrases associated with anti-Muslim hate speech on the platform X (formerly known as Twitter), using the social media analysis tool Brandwatch. The research measured the frequency of these keywords on X between 30 September and 18 October 2023.
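To illustrate the general approach, the sketch below shows how daily volumes of posts matching a keyword list can be tallied from an exported dataset of posts. It is a minimal illustration only: the file name, column names and placeholder terms are assumptions for demonstration, and it does not reproduce the actual 86-term list or the Brandwatch interface.

```python
import re
import pandas as pd

# Placeholder terms only; the real 86-term keyword list is not reproduced here.
KEYWORDS = ["example_slur_1", "example_slur_2"]
pattern = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

# Hypothetical export of posts with a timestamp and a text column.
posts = pd.read_csv("x_posts_export.csv", parse_dates=["created_at"])
posts["matched"] = posts["text"].fillna("").str.contains(pattern)

# Daily volume of posts containing at least one keyword.
daily_volume = (
    posts[posts["matched"]]
    .set_index("created_at")
    .resample("D")
    .size()
)
print(daily_volume.loc["2023-09-30":"2023-10-18"])
```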

While the keywords selected for this analysis promote overt hatred of Muslims, a basic keyword search cannot confirm the intent or context in which these words are used (e.g., if they were used to highlight and counter anti-Muslim narratives). Additionally, keyword-based approaches for approximating volumes of hate speech cannot accurately identify hateful messaging that uses coded language. Accordingly, this approach should not be seen as a comprehensive capture of hate speech, but rather as indicative of broader trends in anti-Muslim mobilisation.

To assess the accuracy of this keyword list, we manually coded a random sample of 100 posts, finding that 81% contained anti-Muslim sentiment, including hateful slurs, dehumanising language or harmful stereotypes about Muslims. With these caveats in mind, this data can nevertheless be seen to reflect the broader environment of degrading anti-Muslim mobilisation in the aftermath of Hamas’ terrorist attacks. [1]
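The arithmetic behind that precision check is straightforward; the sketch below reproduces it, together with an approximate 95% confidence interval from a simple normal approximation. The labels list is a stand-in for the analysts’ manual judgements, not the underlying data.

```python
import math

# Stand-in for analysts' manual judgements on the random sample
# (1 = post contains anti-Muslim sentiment, 0 = it does not).
hand_labels = [1] * 81 + [0] * 19
n = len(hand_labels)

precision = sum(hand_labels) / n
# Normal-approximation 95% confidence interval for the precision estimate.
margin = 1.96 * math.sqrt(precision * (1 - precision) / n)

print(f"Estimated precision: {precision:.0%} (+/- {margin:.0%})")  # ~81% +/- 8%
```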

Graph 1: Volume of posts containing anti-Muslim keywords on X over time

The number of posts containing anti-Muslim keywords spiked over the weekend of Hamas’ terrorist attacks on 7-8 October, a 422% increase compared with the previous two days. The entire week of Hamas’ terrorist attacks (7-13 October) saw a 250% increase in anti-Muslim hate compared with the previous week.
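For reference, the percentage-increase figures used throughout this Dispatch follow the standard formula: the new volume minus the baseline volume, divided by the baseline. The snippet below shows the calculation with hypothetical volumes chosen only to reproduce the reported ratios; they are not the underlying post counts.

```python
def pct_increase(baseline: float, new: float) -> float:
    """Percentage change in volume relative to a baseline period."""
    return (new - baseline) / baseline * 100

# Hypothetical volumes chosen only to reproduce the reported ratios;
# a 422% increase means the new volume is 5.22 times the baseline.
print(pct_increase(100, 522))  # 422.0
print(pct_increase(100, 350))  # 250.0
```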

ISD researchers found a sustained 297% increase in such posts on X in the five days following the initial spike in posts containing anti-Muslim keywords. Posts containing anti-Muslim keywords were therefore not isolated to Hamas’ terrorist attack itself, but have remained elevated throughout the conflict period.

The data suggests that mis- and disinformation played a crucial role in the spread of anti-Muslim attitudes. Among posts containing dehumanising language in the week following the attack, the top link – shared over 500 times – was a blog post by Pamela Geller, who has been described as an anti-Muslim activist by the Southern Poverty Law Center, titled ‘Israel: Babes Kidnapped by Muslim Terrorists Held in Cages’. The link included a photo which claimed to show captured Israeli children in cages but which has since been proven to predate the current conflict.

Beyond mis- and disinformation, the use of anti-Muslim keywords also served to stereotype and essentialise Muslim communities as violent, attributing Hamas’ actions to Muslims in Western countries. In Geller’s blog title, for example, the phrase ‘Muslim terrorists’ implies a connection between Hamas’ actions and the Muslim community as a whole, centring their Muslim faith (as opposed to the ideology of Islamist extremism) as inherent to their violence.

Hashtags promoting anti-Muslim hate and negative stereotypes about Muslims increase on X, amplified by a range of influencers

In addition to exploring the prevalence of keywords associated with explicit anti-Muslim hate speech on X, we conducted a separate analysis of hashtags linked to campaigns which either promote anti-Muslim hate, or negative stereotypes about Muslim communities.

An initial examination of anti-Muslim hashtags used on X similarly shows a significant increase in online anti-Muslim hate following Hamas’ terrorist attacks. In the 10 days immediately following the attack, use of the hashtags #fuckislam, #stopislam and #banislam rose 132% compared with the previous period.

A specific hashtag that emerged in the aftermath of Hamas’ terrorist attack was #DayofJihad. Originating in a Hamas spokesperson’s call for a ‘Day of Rage’ on 13 October, social media posts quickly generated concern about a potentially heightened threat landscape in Western countries. This was, in many cases, framed to stoke fear of Muslim communities, with anti-Muslim users instead referring to 13 October as a ‘Day of Jihad’. This once again drew on harmful stereotypes about the relationship between Islam and violence, and the hashtag was often used to suggest that Muslim individuals across the world could pose a threat to their neighbours due to Hamas’ actions. While concerns for safety were in many cases genuine against the backdrop of rising polarisation and antisemitism, analysis demonstrates how the hashtag was used as a forum for anti-Muslim sentiment.

From Wednesday, 11 October, when the first claims about a ‘Day of Jihad’ emerged, to 15 October, there were 327,000 mentions of the hashtag and associated phrase on X, peaking on the scheduled ‘day’ of 13 October at 175,000.

Graph 2: Mentions of ‘Day of Jihad’ on X

Hashtag uses and mentions did include scepticism of the so-called ‘Day of Jihad’, and while not all comments were necessarily anti-Muslim, mentions of the term overwhelmingly shared anti-Muslim narratives. For example, eight of the top 10 retweeted posts mentioning the term explicitly stereotyped or expressed harmful narratives about Muslim communities, or were shared by individuals known to have previously expressed anti-Muslim hate speech.

Influential conservative American accounts from Fox News to Ben Shapiro, Dinesh D’Souza and Ted Cruz all expressed concern over the ‘Day of Jihad’. High-profile accounts expressed sentiments that could stoke fear and tension in the already polarised environment. For example, conservative podcaster Joey Mannarino urged followers not to leave their homes, while actor-turned-pundit James Woods commented that he would be carrying ‘extra ammo’.

 

A prominent theme which emerged from posts about the ‘Day of Jihad’ was an attempt to channel heightened fear into support for anti-migrant positions and policies. Among the most retweeted relevant posts were comments from American activist Laura Loomer, who has reportedly described herself as a ‘proud islamophobe’, warning of individuals from Arab countries entering the US and identifying all people from Muslim-majority countries as a threat to Americans. Similarly, a blue-tick verified account called ‘End Wokeness’ with over 1.7 million followers directly equated a perceived weakness in immigration policies with a threat from Muslim individuals.

Anti-Muslim sentiment was by no means isolated to Western communities and online users. Highly followed accounts which list their location as India were found to be influential in driving anti-Muslim discourse. Posts from accounts with over 100,000 followers used dehumanising language about Muslims and accused Muslims of violence both in Israel and globally.

Such sentiment is often expressed by the far right of the Hindutva movement, a form of Hindu ultra-nationalism which marginalises and threatens Muslim communities. Coordination of anti-Muslim sentiment between activists in India and around the world is not an innovation of this conflict; Hindutva and pro-BJP (India’s ruling party) online activists contributed to the tensions which erupted between Muslim and Hindu communities in Leicester, England in 2022. The current conflict offers Hindutva activists an opportunity to export existing anti-Muslim sentiments from India to global anti-Muslim communities on social media.

Alt-tech platforms see increase in anti-Muslim hatred

Utilising the same dehumanising anti-Muslim keywords, ISD researchers used the data analysis tool Pyrra to investigate content on alternative social media platforms including Patriots.win, Bitchute, Gab and Odysee. Across these platforms, where extremist movements are known to be active, a clear spike in anti-Muslim hatred is evident, followed by a sustained increase in the subsequent period.
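As an illustration of how the same keyword matching can be compared across several platforms, the sketch below tallies daily matches per platform from hypothetical per-platform exports. The file names and column layout are assumptions for demonstration, not Pyrra’s actual output format.

```python
import re
import pandas as pd

# Placeholder terms only; the real keyword list is not reproduced here.
KEYWORDS = ["example_slur_1", "example_slur_2"]
pattern = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

frames = []
for platform in ["patriots_win", "bitchute", "gab", "odysee"]:
    # Hypothetical per-platform export with a timestamp and a text column.
    df = pd.read_csv(f"{platform}_export.csv", parse_dates=["created_at"])
    df = df[df["text"].fillna("").str.contains(pattern)]
    df["platform"] = platform
    frames.append(df)

# Daily volume of matching posts, one column per platform.
volumes = (
    pd.concat(frames)
    .groupby([pd.Grouper(key="created_at", freq="D"), "platform"])
    .size()
    .unstack(fill_value=0)
)
print(volumes)
```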

Graph 3: Volume of anti-Muslim keywords on alternative social media platforms

The graph below isolates findings from data collection on 4chan, which hosts overtly racist and extremist content. It demonstrates both a spike in the use of anti-Muslim keywords around the Hamas terror attack and a sustained increase in the following days, aligning with trends on mainstream platforms. The week immediately following Hamas’ terror attack saw more than a 10-fold increase in the volume of posts containing anti-Muslim language compared with the previous week.

An initial rise in content from 6 October may relate to users located in the United States, where, given the time difference, it was still 6 October when the attack began early in the morning of 7 October Israel time. A review of posts on 6 October confirmed that a majority related to Hamas’ attack, and this content has therefore been treated as post-attack in this analysis. A secondary spike on 12 and 13 October may coincide with the so-called ‘Day of Jihad’ (described in greater detail above), which rallied anti-Muslim communities around fear of Muslim and immigrant individuals.

Graph 4: Volume of anti-Muslim keywords on 4chan

Conclusions

This research offers an initial exploration of the scale and nature of anti-Muslim hatred on social media following Hamas’ terror attack in Israel. While it is not a substantive quantitative assessment of the scale of anti-Muslim hatred across multiple platforms, it demonstrates how the incidence of anti-Muslim hatred jumped in response to the escalation in conflict and the 7 October terrorist attack. While this research focusses on prevalence, ISD intends to use this analytical architecture to measure reach and engagement across multiple platforms.

As the conflict continues to escalate and marginalised communities are targeted, these findings demonstrate the urgency for platforms to fulfil their responsibilities to combat hateful language and hate speech online. This will include both enforcing their Terms of Service around hate speech and, in some cases, complying with local regulation.

 

[1] ISD will be addressing these methodological limitations through the deployment of a bespoke classifier for detecting anti-Muslim hate in the coming weeks, an approach which has already been deployed for our analysis of online antisemitism relating to the conflict.