Platforms fail to label and remove AI-generated and manipulated election content

19 September 2024

By: Isabelle Frances-Wright and Ella Meyer


With the US election rapidly approaching, ISD has found evidence that platforms are failing to remove or appropriately label even the most widely debunked deceptive AI-generated content, or more simply edited deceptive media, with individual pieces of content often garnering thousands, if not millions, of views. This is despite significant public commitments to do so from platforms such as Meta, TikTok and YouTube.

This alarming and systematic failure of the platforms to detect and mitigate such content calls into question their ability to detect more sophisticated AI influence operations targeting the election, particularly from state-backed actors such as Russia, China and Iran, who have already been shown to be leveraging AI within their operations.  

It also calls into question the efficacy and utility of coalitions such as the Microsoft-led “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” in which platforms (both social media and AI companies) committed to sharing detection technologies and leads.

Platforms are failing in their commitments to mitigate deceptive AI-generated content

In an analysis across Facebook, Instagram, YouTube, X and TikTok, ISD sought to identify instances of 12 widely debunked and publicized AI-generated images and videos, to assess whether platforms are honoring their commitments on election-related AI-generated content (AIGC) and edited media at the most basic level.

Across these incidents, ISD easily identified 154 instances of election-related AIGC or edited content that platforms had failed to properly remove or label. This content was easily accessible, and the number of instances would have been significantly higher over a longer analysis period. Those 154 pieces of content received a combined 51 million impressions, a level of reach at which content would likely have passed through multiple rounds of platform safety moderation.

Impression counts were not available on Facebook and Instagram. It should also be noted that Meta chose to deprecate its researcher platform, CrowdTangle, mere months before the US election, despite calls from researchers and the US Congress to delay the deprecation until after the election. This decision calls into question Meta’s approach to transparency, as well as its confidence in its own trust and safety functions.

“Swifties for Trump” 

Following the cancellation of Taylor Swift’s Vienna concert dates due to the threat of a terrorist attack, AI-generated photos of Swift and her fans, known as “Swifties,” proliferated across mainstream and fringe platforms. The AI-generated images included fans wearing “Swifties for Trump” shirts; Swift wearing a MAGA hat; and Swift displaying a flag that stated “Trump won.” The content was often accompanied by claims that “Swifties” and Swift herself were supporting Trump in the belief that he could effectively combat terrorist threats targeting her tour [1]. Across X and TikTok, this content received 17.2 million views.

In her subsequent endorsement of Vice President Kamala Harris, made on September 10, Swift stated that the AI-generated content depicting her supporting Trump compelled her to publicly voice her support for Harris, and highlighted her “fears around AI, and the dangers of spreading misinformation.”

Kamala Harris’ crowd size images

At the time of publication, two images had been widely circulated and claimed by certain platform users to be AIGC created and disseminated by the Harris campaign. ISD identified the first image as authentic; it was being used deceptively to further the false narrative that the Harris campaign was using AI to generate crowd images at its rallies.

The second image circulating online was AI-generated and had originally been posted by a parody account on X on August 10, claiming to show a Harris-Walz rally. While the account had “parody” in its bio, the post itself did not include a parody disclaimer and was initially not labeled with a community note. The image spread rapidly across platforms alongside the false claim that it had been shared by the Harris-Walz campaign, and this narrative quickly gained traction despite the campaign never sharing the photo. On all platforms, ISD found multiple posts sharing the AI-generated photo and falsely attributing it to the Harris-Walz campaign. On Instagram, posts including this content received over 17,000 likes. While Instagram does not display view counts for this content, the like count suggests it received significant engagement.

Kamala Harris’ speech at Howard University

A video of Kamala Harris speaking incomprehensibly was identified on X, Instagram and YouTube. In the video, Harris says: “today is today and yesterday was today yesterday. Tomorrow will be today tomorrow.” The video was originally posted and debunked as AIGC in 2023 but reappeared on social media in July 2024. On X, ISD identified multiple unlabeled posts featuring the video, with the earliest post from May 2023. In total, the posts received over 1.6 million views.

Joe Biden using profanities after dropping out of the race

A video of President Joe Biden using profanities after announcing his decision to withdraw from the 2024 presidential election was identified on X and Instagram. The video, which contains an AI-generated clone of Biden’s voice, was initially shared on X, which still has not labeled the content as AI-generated. On X alone, unlabeled posts that featured the video received over 28 million views.  

Additional instances of AIGC and edited media

In addition to these examples, ISD looked at other election-related AI-generated content. A significant amount of content across all platforms targeted VP candidate Tim Walz. Across the platforms that track content reach, a small number of Walz-related AIGC instances amassed 1.5 million views.

On X, an account shared a digitally altered video of Vice President Kamala Harris and Tim Walz posing in front of a Revolutionary Communists of America sign. X was slow to label the original post as manipulated media; the post received 400,000 views before a community note was added. Unlabeled screenshots of the video continue to circulate on X and Instagram. The content was part of a trend attempting to tie the Harris-Walz campaign to communist ideologies. On Instagram alone, unlabeled, manipulated images of Harris and Walz with communist-related symbols received over 11,000 likes.

In addition to AI-generated content, platforms have repeatedly failed to remove digitally altered photos. For instance, an edited photo of Kamala Harris in a sexually suggestive outfit was identified on all platforms. This once again calls into question platforms’ ability to mitigate the threat of AI when images manipulated through simpler means (often referred to as “cheapfakes”) still evade detection and enforcement.

Platform policies and responses  

As previously assessed by ISD in a comparative analysis, many of the platforms have updated their policies or released public commitments on synthetic media in the last 12 months, likely due to the proliferation of AI-generated content combined with the upcoming election. The policies, however, are often vague about which content meets the threshold for removal, relying instead on labeling.

Notably, an Oversight Board case examining Meta’s manipulated media policy recommended that the policy be expanded to include incidents where speech was not involved, stating that Meta “should take prompt action to amend that policy to bring it into alignment with its stated purposes.” At the time, the policy would have led to the removal of falsely generated speech by a presidential candidate, but not a video of a candidate in which they did not speak.

The platforms’ failure to be transparent about their highly publicized plans to combat deceptive AIGC raises concerns about their internal mechanisms for detecting AIGC within moderation processes. In ISD’s analysis, a range of detection tools left little doubt that certain content was AI-generated, yet it remained unlabeled on the platforms; this suggests there may not be any systematic detection of AIGC within moderation processes.

Additionally, when AIGC is detected via external sources, such as forensic analysis publicized by the media or debunks via other OSINT strategies, this information appears not to be effectively fed back into platforms’ moderation processes. Nor are internal tools for identifying duplicate content being effectively deployed. Through Google’s reverse image search, for example, many instances of widely debunked deepfakes can be found across platforms. This calls into question platforms’ investment in their own tools, if a tool as basic as reverse image search can find duplicates of debunked AIGC that platforms have not yet detected themselves (assuming they would apply their policies when detection takes place).
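To illustrate how inexpensive this kind of duplicate matching can be, the sketch below compares a newly uploaded image against a known, debunked image using perceptual hashing, a standard near-duplicate detection technique. This is a minimal illustration, not a description of any platform’s actual pipeline: it assumes the open-source Pillow and imagehash Python libraries, and the file names and distance threshold are hypothetical.

```python
# Minimal sketch: near-duplicate detection of a debunked image via
# perceptual hashing (pHash). Assumes the Pillow and imagehash
# libraries; file paths below are hypothetical placeholders.
from PIL import Image
import imagehash

# Perceptual hashes of re-encoded, resized or lightly cropped copies
# of the same image typically differ by only a few bits. 8 is a
# common, conservative cutoff for the default 64-bit pHash.
MAX_DISTANCE = 8

def is_duplicate(candidate_path: str, debunked_path: str) -> bool:
    """Return True if the candidate image is a near-duplicate of a
    known, debunked image."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    debunked_hash = imagehash.phash(Image.open(debunked_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return candidate_hash - debunked_hash <= MAX_DISTANCE

# Hypothetical usage: flag an upload that matches a debunked deepfake.
if is_duplicate("new_upload.jpg", "debunked_deepfake.jpg"):
    print("Near-duplicate of debunked content; route for review/labeling.")
```

Because perceptual hashes are robust to the re-encoding, resizing and screenshotting that reposted content typically undergoes, matching new uploads against a library of already-debunked images is a cheap, well-understood capability, which is what makes the gaps identified above notable.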

Repeat offenders are evading account level actions  

By failing to detect, remove or label individual instances of AI-generated or manipulated content, platforms allow repeat offenders to more easily evade account-level actions such as temporary suspensions. On X, for example, one account ISD studied has continually shared deceptive AI-generated content; despite this, it appears X has not taken action against the account, which has received over 11 million views on its election-related AI-generated content. This lack of policy enforcement allows harmful AI-generated content to remain accessible, which can erode the public’s trust in online content.

Media debunks are not reaching voters  

While the AIGC assessed by ISD had been widely debunked and publicized in the media, comments on the posts made clear that these debunks had not reached platform users, who stated their belief in the content’s authenticity even after the debunks had occurred. For example, an AI-generated news article containing false information about VP candidate Tim Walz was still receiving comments asserting its authenticity as recently as September 4, despite having been debunked on August 12.

End notes 

[1] ISD will not link to AIGC content, so as not to spread disinformation.