Illegal, violent terrorist content relating to Hamas-Israel conflict reaches millions on X

Last updated 29 November 2023, originally published 13 October 2023

By: Moustafa Ayad and Tim Squirrell

X (formerly Twitter) is an epicentre for content praising the 7 October attack perpetrated by the Hamas-linked al-Qassem Brigades, generating millions of views in a single 24-hour period while escaping moderation of any kind. In the seven days after the attack, ISD researchers monitored the spread of terrorist and violent content, as defined by the European Union, across X and other platforms, and found that specific hashtags supporting al-Qassem Brigades operations have become vectors for such content on the platform. Subsequent analysis has shown that, weeks later, the vast majority of this content remained on the platform despite user reporting.

In a 24-hour period from 11-12 October, ISD analysts surfaced 128 posts containing content glorifying terrorist violence on X (formerly known as Twitter), primarily linked to the Izz ad-Din al-Qassem Brigades militia group, which led a multipronged, complex attack in southern Israel on 7 October.

At time of writing (14:30 UTC+1 on 12 October), this content had a cumulative reach of over 16 million views on the platform, with individual post engagement ranging from around 50 to 2.2 million views. In the period from Wednesday night (11 October) to Thursday morning (12 October), London time, the dataset garnered at least an additional 2.5 million views.

These posts originated from 45 unique accounts with a collective following of more than 3 million. Since 7 October, when Hamas militants launched a series of ground attacks in Israel, these accounts have circulated around 15 distinct pieces of content branded under the iconography of al-Qassem Brigades and its affiliated media brand. Of these 45 accounts, at least 20 have ‘blue tick’ verification, meaning they have likely paid for some level of Twitter Blue subscription.

At the original time of writing (13 October), none of the content identified by ISD had been removed or labelled by X, despite the majority violating the platform’s own violent and hateful entities policy. Retrospective analysis of 111 posts from the dataset in clear violation of this policy, conducted three weeks after the initial research, showed that only 7% of posts were unavailable, whether due to proactive detection or user action. Subsequent flagging of violating content using user reporting tools resulted in only three further pieces of content being removed over the following four days, leaving 90% online.

Although all 128 posts depict graphic scenes of violence – including desecration of corpses and close-range footage of gunfights with apparent fatalities – only two carried a ‘sensitive content’ label. Three weeks later, this had risen only slightly, with 5% of violating content in the dataset carrying a sensitive content warning. Other videos appear to contain CCTV footage in which civilians are shot and subsequently dragged from their vehicles by al-Qassem militants.

Most posts contained branded content from the al-Qassem Brigades, a designated terrorist group in the EU; such content is therefore likely illegal under the EU’s Regulation on addressing the dissemination of terrorist content online, the Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law, or both. As a result, the content would also fall within scope of provisions under the EU Digital Services Act (DSA) requiring platforms to remove illegal content and to enforce their own terms and conditions.

Much content was surfaced using the hashtag ‘al-Aqsa Storm’ in Arabic, which has been used over 1.9 million times on X since 7 October, according to data collected via Brandwatch. Many posts using this hashtag provide updates on the conflict writ large, without reference to violent or terrorist material. This means, however, that average users trying to follow the rapid escalation of events on the ground will likely be exposed to al-Qassem propaganda, branded or otherwise.

‘al-Aqsa Storm’ and ‘al-Aqsa Flood’ have been adopted by the al-Qassem Brigades, as well as their supporters, to describe the Hamas-led incursion into Israel and the attacks carried out against both Israeli military personnel and hundreds of unarmed civilians since 7 October. The hashtags affiliated with these operational names have become more popular through use by influential figures, including Iranian Ayatollah Ali Khamenei, who adopted both the phrase and the hashtag in a post showing footage of civilians fleeing fighters at a concert near the Israel-Gaza border.

Example: One pro-Kremlin, Arabic-language account with 199,000 followers posted a video on 11 October showing the desecration of an IDF soldier’s corpse, achieving over 640,000 views. While the post itself does not promote terrorism, the content was taken from an al-Qassem militant’s GoPro or body camera and has no label or warning applied by X.

Content appears to circulate within Telegram groups before travelling to X: new al-Qassem content would first appear in specific Telegram channels and then be shared through those networks to public fora like X. Additionally, accelerationist neo-Nazi groups operating within the ‘Terrorgram’ network have been sharing the same content, hoping to inspire mass attacks on Jewish communities.

At least one of the most popular videos (362,300 views), which contained content created by Hezbollah, was shared by War Monitor, an account previously promoted by Elon Musk as a “good” account for following the war in real time. Musk subsequently deleted this tweet after users flagged that the account had engaged in antisemitism.

Note: This article, originally published on 13 October 2023, was updated on 29 November 2023 to include additional data findings on platform action taken by X on flagged content.