Disconnected from reality: American voters grapple with AI and flawed OSINT strategies

7 November 2024

By: Isabelle Frances-Wright, Ellen Jacobs, Ella Meyer 


Throughout the US presidential election cycle, the impact of AI-generated content was predicted, debated and theorized about. Much of the analysis has focused on the impact of individual pieces of content on voters. However, the broad and multifaceted effects of a landscape rife with AI-generated content, and of the discourse around the tools themselves, cannot be overstated.

While there have been reports downplaying the impact of AI in the 2024 election, an analysis of content by ISD shows that this may be a flawed assessment. ISD reviewed and coded a random sampling of 300 posts across X, YouTube and Reddit in which users discussed election topics and referenced AI. Through its analysis, ISD found that users were misidentifying content in 52% of cases, often claiming authentic content was AI-generated and justifying their assessments with flawed OSINT strategies or unreliable online tools.  
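
For a sense of the statistical weight a 300-post sample carries, the sketch below computes the misidentification rate along with a 95% confidence interval. The tallies are hypothetical round numbers chosen to match the reported 52%; ISD's underlying post-level data is not public.

```python
import math

# Hypothetical tallies mirroring the shape of ISD's coding exercise;
# the real post-level dataset is not public.
sample_size = 300    # posts reviewed and coded
misidentified = 156  # posts where the user's authenticity call was wrong (156/300 = 52%)

p = misidentified / sample_size

# 95% confidence interval via the normal approximation to the binomial
se = math.sqrt(p * (1 - p) / sample_size)
low, high = p - 1.96 * se, p + 1.96 * se

print(f"Misidentification rate: {p:.0%}")
print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 46% to 58% at n = 300
```

Even under these assumptions, the lower bound of the interval stays above 46%, so a headline finding that users err in roughly half of cases would not be an artifact of the modest sample size.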

In conversations where a specific piece of content was not mentioned but users were still discussing AI in the context of the election, 45% of posts included broad accusations that one candidate, political party, or group of voters systematically used AI for deceptive purposes. These accusations were used to imply that nothing emanating from those actors could be considered authentic. 

Even when voters accurately assessed the authenticity of a piece of content, the assessment was often combined with other false information or conspiracy theories and taken as justification to dismiss similar, but authentic, content as AI-generated. Social media users also expressed fear and confusion about how to identify AI-generated content or, in more existential terms, how to maintain a grip on reality. 

These risks have occurred in tandem with social media users potentially becoming desensitized to the risks of AI by earlier, less sophisticated tools whose output is easy to spot without specialized tools or strategies. This desensitization posed a particular risk on election day, when time-bound events serve as the most fertile ground for disseminating hyper-realistic AI-generated content.  

Key Findings: 

  • Since August, we’ve seen 1.1 million individual social media posts with an estimated 16.29 billion views discussing AI in the context of the election on X alone. 
  • It appears social media users are inaccurately assessing the authenticity of content over half (52%) of the time. 
  • In an alarming trend identified by ISD, a number of platform users are claiming or insinuating that everything disseminated by either of the presidential candidates or their supporters is AI-generated. In content discussing AI generally in the context of the election, 45% included these types of claims.  
  • When a specific image, video or audio clip was falsely assessed by platform users as AI-generated, the posts also included broad accusations or insinuations of AI generation by a specific political party or group 31% of the time. 
  • One of the surprising findings was how frequently social media users discussed inaccuracies in AI text generation or voice assistant tools, and their role in potentially interfering in the election. Discussion of text-based AI and AI audio assistants accounted for 21% of the sample. 
  • While AI-generated content has become more realistic as the technology develops, social media users often rely on outdated or flawed strategies to identify whether content is AI-generated. Users who made a false assessment of content's authenticity attempted to use OSINT strategies or AI detection tools 13% of the time.
  • Online AI detection tools can provide faulty assessments or be weaponized to fool social media users via doctored videos and images.

AI Is Here and Voters Are Struggling to Discern Real from Fake 

In the last three months, we have seen 1.1 million social media posts with an estimated 16.29 billion views discussing AI in the context of the US election on X alone. The online discourse covers a wide range of topics: attempts at assessing content, false accusations of AI, confusion among voters as to the truth of reality, and candidates themselves posting AI-generated images.  

Despite widely publicized debunks of AI-generated content, platforms are failing to remove or appropriately label it. This becomes more troubling given that, according to ISD's analysis, social media users appear to inaccurately assess whether content is authentic 52% of the time. Users are more often claiming authentic content is AI-generated than claiming AI-generated content is authentic.  

“Everything is AI” 

In an alarming trend identified by ISD, a number of voters claimed or insinuated that everything disseminated by either of the presidential candidates or their supporters is AI-generated. This serves as a rationalization for voters to discredit everything from one side or the other as fake, eroding the information ecosystem. In content discussing AI generally in the context of the election, 45% included these types of claims. This further plays into the "liar's dividend" effect, whereby voters begin to lose faith in the information ecosystem in general.  

When a user inaccurately assessed a piece of content as authentic or AI-generated, this kind of broad accusation of universal AI generation was included in 31% of the posts. This may show that once a user assesses a piece of content as AI-generated, whether accurately or not, they take that as blanket justification to assume everything else evidencing the same stance or ideology is also AI-generated.  

The idea that everything is AI, and we can no longer accurately discern generated content from reality, may lead some to disconnect from the information ecosystem altogether. One X user stated: “It’s all AI bullshit. Republicans, Democrats, Russia and whoever are all using it to bombard us with their version whatever illusion they want to peddle today to influence. I frankly am not sure where you can seek reality anymore.” 

Discerning Deepfake Creators and False Accusations 

Assessing whether a piece of content is AI-generated is now just one piece of the puzzle voters must solve. Inaccurate assessments of who created and spread the content can have just as much impact as the content itself.  

For instance, two images began circulating about the crowd size at Vice President Kamala Harris's rally. One was authentic; the other was a deepfake accompanied by a false narrative that the campaign had released the image. A prominent right-wing influencer claimed on X that a Harris campaign staffer created and disseminated the deepfake, further claiming that "this is a form of psychological warfare and election interference and manipulation". The post garnered 3.6 million views and does not have a Community Note. In fact, the image had originally been created and gained traction via an account that describes itself as "parody" in its bio. Despite this, content repeating the false narrative that the Harris campaign was sharing AI-generated images received tens of millions of views online. 

Another post with nearly 500,000 views claimed that Democrats had created the AI-generated video of Trump insulting voters, and that Democrats were therefore using AI to lie, cheat, and steal because they are unable to win fairly. Both examples aim to degrade trust in the efficacy and fairness of the electoral process, setting the stage for future narratives portraying AI as a mechanism for committing election fraud.  

Bias in Text-Based AI and AI Audio Assistants Plays a Dominant Role  

One of the surprising findings within the sample was how frequently social media users discussed inaccuracies in AI text generation or voice assistant tools. Discussion of text-based AI and AI audio assistants accounted for 22% of the sample. Beyond discussing inaccuracies, errors or bias, users often alleged that these incidents were evidence of AI tools being intentionally rigged by their owners to influence voters' opinions.  

One prominent example involved Meta's AI assistant insisting the attempted assassination of former President Trump didn't happen. While this was likely an error caused by the model's training data not including recent news, many voters felt Meta, and specifically Mark Zuckerberg, was trying to directly influence the outcome of the election. One user, in a tweet with 2.8 million views, said: "we're witnessing the suppression and coverup of one of the biggest most consequential stories in real time." 

Another prominent incident occurred when Amazon's voice assistant Alexa answered when asked why someone should vote for Vice President Kamala Harris but refused to answer the same question about Donald Trump. Videos of the incident quickly went viral, with users responding with statements such as "Hey Speaker Johnson ask Jeff Bezos if his Amazon Alexa AI is a campaign contribution or election interference, we want to know?" 

Setting the Stage for Post-Election Results  

Beyond claims of AI being used by certain political actors or groups, conversation has begun to shift toward AI's impact on the administration of the election and its role in certifying votes. One X user posted, "Artificial Intelligence is going to decide the outcome of the 2024 election". Elon Musk referenced AI interference in a recent rally speech, stating: "The last thing we want is electronic voting machines. We want paper ballots, in-person, with ID. Advanced AI will be super good at hacking computers. If you have voting machines that are connected to the internet and you have advanced AI that can potentially affect those machines, that's very dangerous." Just one clip of this quote received 15 million views on X. 

Flawed OSINT Strategies Undermine Media Literacy  

While AI-generated content has become more realistic as the technology develops, social media users often rely on outdated or flawed strategies to identify whether content is AI-generated. These include scrutinizing hands, reflections, the whites of a human's eyes and lettering on objects to assess a piece of content's authenticity. When platform users made a false assessment, 58% included failed attempts at using OSINT strategies or AI detection tools, showing just how ineffective these strategies can be.  

Online detection tools can provide faulty assessments or be weaponized to fool social media users via doctored videos and images. ISD identified a prominent example of the latter in which an X user, in a post with 5 million views, claimed to have run a phone call made by President Joe Biden to Vice President Harris through an AI audio detection tool. In the video, the user uploads a file named "Biden's 'audio call' to Kamala Harris" and the tool returns a 98% probability that the audio was AI-generated. Many platform users relied on this video as proof that the phone call was fake, despite the detection tool's developer stating that the original poster had run a different clip through the tool in order to produce the "AI-generated" result.  
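
This kind of bait-and-switch is detectable in principle: if the exact file fed to the detector can be obtained, a fact-checker can compare its cryptographic fingerprint against the clip actually in circulation. The sketch below is a minimal illustration; the file names are hypothetical. Note that identical hashes prove two files are the same, while differing hashes are only a first signal, since re-encoding the same audio also changes the digest.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names: the clip circulating on X versus the clip
# that was demonstrably uploaded to the detection tool.
circulating = sha256_of("biden_harris_call_circulating.mp3")
uploaded = sha256_of("clip_uploaded_to_detector.mp3")

if circulating == uploaded:
    print("Same file: the detector's verdict applies to the circulating clip.")
else:
    print("Different files: the verdict may not apply; inspect further.")
```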

In another example, an X user claimed that a different photo taken at a Harris rally was AI-generated. The post, which garnered 1.7 million impressions, included a screenshot from an AI image detection tool stating the photo had a 92% chance of being artificial. However, multiple fact-checking organizations have confirmed that the image is authentic, meaning the AI image detection tool was either incorrect or was fed a different photo. Despite this, many users cited the screenshot as proof of their suspicions that the photo was AI-generated, pointing to hands, arms, and shadows in the photo that they thought were suspicious. 

Mediocre AI Content May Provide a False Sense of Security  

Images made with certain tools, particularly free or early-stage versions, can sometimes be easily identified. This gives voters a false sense of security that they are well equipped to identify AI content. One social media user stated, "if this is AI, we have nothing to worry about". X's Grok, for example, is frequently used to share AI content on the platform, but its output is often unrealistic, particularly when depicting Vice President Kamala Harris, leading to users' over-confidence in their ability to spot AI-generated content.  

Policy Landscape 

Given the rapid development and widespread deployment of generative AI technologies, alongside their almost instantaneous use by bad actors to deliberately mislead others or cause harm, there has been a push for lawmakers to provide guardrails around the responsible use of generative AI. In October 2023, the Biden Administration released an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, and both chambers of Congress appointed AI task forces. Senate Majority Leader Chuck Schumer led a series of convenings with academics, civil society representatives, and industry, resulting in a roadmap for AI policy in the Senate. On the House side, Speaker Mike Johnson and Minority Leader Hakeem Jeffries established a bipartisan task force of 24 House members to create a roadmap for AI policy. However, Speaker Johnson has recently indicated that he does not want large regulatory approaches that might create "red tape" for AI development, and that the House AI report is unlikely to recommend specific pieces of AI legislation. 

Many of the existing federal legislative proposals have focused on mandating the labeling of all AI-generated content, such as Senators Schatz and Kennedy's AI Labeling Act. Others have focused on addressing the harmful uses of generative AI, such as the creation of nonconsensual intimate image (NCII) deepfakes, which Rep. Ocasio-Cortez's DEFIANCE Act and Senators Cruz and Klobuchar's TAKE IT DOWN Act address. 

State-level policymakers have also been active in proposing AI and media literacy legislation. National attention focused on California's Governor, Gavin Newsom, who was sent a bevy of AI-related bills, including the controversial S.B. 1047. Newsom ultimately vetoed the bill, which would have set some of the country's strictest pre-deployment testing standards for AI companies and made them liable for critical harms.  

While many legislators have focused specifically on the development and rollout of AI technology itself, ISD's research makes clear that media and digital literacy education will be a critical component of preparing society for an influx of AI-generated content. Delaware became a leader in media literacy efforts when it passed a law in 2022 requiring media literacy standards to be developed and maintained for students in grades K-12. New Jersey similarly passed a law in 2023 requiring media literacy to be included in K-12 curricula. 

Conclusion 

The rapid increase of AI-generated content has created a fundamentally polluted information ecosystem in which voters struggle to assess content's authenticity, increasingly assume authentic content is AI-generated, or question whether anything they see is real at all. This deterioration of trust in political discourse posed significant risks during the election period, when time-sensitive events created fertile ground for the spread of AI-generated content and voters had to make critical decisions on increasingly unstable informational foundations. Without improved media literacy education and stronger social media platform safety measures, this crisis of authenticity threatens to undermine not just election cycles, but the future of democratic discourse itself. 
