Misleading and manipulated content goes viral on X in Middle East conflict
14 April 2024
By: Isabelle Frances-Wright and Moustafa Ayad
On 13 April, Iran launched waves of drones and missiles towards Israel, claiming to target military and government sites in response to an Israeli strike on an Iranian diplomatic mission in Syria on 1 April. Echoing the playbook of numerous conflicts over the past six years, the “fog of war” induced by rapid developments on the ground has been used by OSINT accounts, influencers and, in some cases, public officials and state media outlets as an excuse for sharing unverified and often falsified images and videos. Within seven hours of Iranian drones being launched towards Israel, 34 false, misleading or AI-generated images and videos claiming to show the ongoing conflict received over 37 million views on X (formerly Twitter).
Key Findings
- Within seven hours of Iranian drones being launched towards Israel, 34 false, misleading or AI-generated images and videos claiming to show the ongoing conflict received over 37 million views on X (formerly Twitter).
- 77% of the accounts posting this content were ‘verified’ paid premium accounts, benefiting from direct algorithmic amplification by X.
- Only two of the posts identified by ISD contained a ‘community note’ (contextualizing information sourced from the X community) at time of writing.
- Several of the accounts spreading false information claimed to be ‘OSINT’ (open-source intelligence) accounts, a continuation of a trend established during the Russia-Ukraine War and Israel-Hamas conflict.
- The Iranian government showed footage from wildfires in Chile on state TV, claiming it showed damage incurred in Israel from Iranian strikes.
Overview
Following the 1 April Israeli strike on an Iranian diplomatic mission in Syria, Iran threatened full-scale retaliation against Israel and potentially its allies in the region. On 13 April, it launched waves of drones and missiles towards Israel, claiming to target military and government sites in response to that strike.
Within seven hours of the initial launch of Iranian drones towards Israel, ISD analysts identified 34 false, misleading or AI-generated images and videos on X claiming to show the ongoing conflict in the Middle East. Together, these posts generated over 37 million views.
The dynamics witnessed on X indicate that during crises, paid premium accounts can lend significant prominence to unverified and sometimes falsified information and content, fostering panic. Similar dynamics were witnessed on the day of the 7 October Hamas attacks on Israel and have been a consistent theme throughout the Russian invasion of Ukraine. The ‘OSINT’ accounts sharing disinformation on X are lent a sense of legitimacy both by their premium account status and by a notable number of other ‘verified’ paid premium followers who often engage with their content. Unlike mainstream media outlets, which do not focus solely on breaking news events, these accounts produce hundreds or thousands of posts during a breaking news situation, leading to users’ feeds being dominated by the content they produce.
Across the 34 videos and images collected on X by ISD after the onset of the Iranian strikes against Israel, content fell primarily into two categories: repurposed footage of earlier incidents in the current conflict or of unrelated conflicts, and content that appears to be AI- or computer-generated. The latter includes images of “supersonic missiles from Iran” as well as of President Biden in the situation room in military fatigues. Videos of previous strikes in Lebanon, Syria and Ukraine are also falsely presented as showing Iran’s strike on Israel.
This content is primarily spread by accounts claiming to be OSINT experts, citizen journalists or breaking news outlets, and is disseminated and amplified swiftly given the rapid pace of developments on the ground. One account posting false and misleading video content, which presents itself as a source for “Real time breaking news alerts!”, has 1.7 million followers and tagged the majority of its posts during the strikes with “WW111 ALERT” or “WW3 WARNING”. It is possible that social media users rely on the large followings of such accounts (often over one million), and/or their being followed by public figures or other high-profile users, as a proxy for authenticity when assessing the accounts and the information they disseminate. Several of these accounts have previously shared footage claimed to show civilians in Gaza killed by Israeli strikes; the victims were in fact Syrians killed by Russian and Syrian regime strikes.
Most accounts identified by ISD analysts spreading this type of content on X were ‘verified’ paid premium users, whose content receives algorithmic amplification from X. Combined with this pay-for-play amplification, posting misleading and inflammatory content allows these accounts to fill a vacuum of verified information during crisis events, further increasing their followings and influence.
The Iranian government, in an apparent attempt to show evidence of military wins, ran repurposed footage of Chilean wildfires on state TV, claiming it showed damage incurred in Israel from the strikes. These images were then widely spread across social media.
Only two videos identified by ISD analysts contained a community note at the time of analysis, and those videos appeared to have circulated widely before the notes were applied. While community notes show promise as a strategy to counter the spread of false information, they must be applied faster during crises to be effective. Equally, once a community note is applied to a piece of false content, it should be applied to all duplicates of that content across the platform.