Livestream content violations across platforms fly under the radar ahead of the US election

18 October 2024

This Dispatch is part of a series assessing platform enforcement of guidelines ahead of the 2024 US elections. Our Election Scorecard provides a comprehensive analysis of platform community standards and enforcement. You can find the report and a platform-by-platform comparative analysis on our website.


As the US presidential election draws near, ISD reviewed livestream content featuring discussions about the election, US politics and campaign issues on Facebook, YouTube, TikTok and X (formerly Twitter). We found that these social media platforms are failing to enforce community guidelines related to election integrity, political misinformation, hate and extremism.

Livestreaming has become an increasingly effective tool for those spreading hate, misinformation and extremism, and the nature of the medium presents significant challenges for moderation. Platforms typically rely on artificial intelligence (AI), algorithms and user reports to flag inappropriate content, but these measures often fail to prevent the rapid spread of harmful material. For example, in March 2019 the Christchurch mass shooter livestreamed the attack, which killed 51 people, on Facebook. Days after the attack, the platform confirmed its AI systems had failed to automatically detect the streamed shooting.

ISD identified and reviewed 26 pieces of livestream content judged to violate community guidelines. This violative content included videos in which users repeatedly pushed debunked election denialism, referred to Jewish people as a “parasitical class,” and called for members of the Senate and Congress to be “strung up.”

To test platform responses to such violations, ISD reported all livestreams and accounts to their respective platforms.

All but one of the livestreams and accounts analyzed remained available as of October 15. Our findings show that platforms still struggle to moderate livestream content and enforce their community guidelines on misinformation, hate speech and extremism, even as the election nears.

Key findings

Livestream content on major social media platforms has become a key engagement tool, enabling real-time broadcasting of events, discussions and gaming. Livestreaming surged during the pandemic, including for political discussions, particularly during the 2020 election cycle. As the capability and popularity of platforms’ livestreaming options have grown, so too has their use for discussing and creating content related to US politics, presidential elections and topical issues.

Livestream content on YouTube and Meta’s Facebook and Instagram platforms typically resembles traditional TV programming, with hosts discussing topics and interacting with viewers. YouTube additionally enables monetization through its ‘Super Chats’ feature, which, per previous ISD research, has enabled users and YouTube to profit from comments that promote violence, conspiracy theories, misinformation and hate.

TikTok’s approach to livestreaming differs, featuring interactive sessions with multiple users and monetization through viewer-purchased virtual gifts. X provides audio-only livestreams via its Spaces feature, with no visual elements or direct monetization.

Platform community guidelines extend to livestream content, but moderating livestreams for policy violations presents significant challenges, and preemptive moderation is difficult. ISD’s scorecard assessing platform policies found their guidelines to be vague, enabling ambiguity and inconsistency in content moderation; livestream moderation is no exception. As noted above, the AI, algorithms and user reports that platforms rely on to flag inappropriate content often fail to prevent the rapid spread of harmful material.

  • Platforms are failing to enforce their own content policies on livestreams. Just 1 of the 26 livestreams ISD reported was removed.
  • The reported content appeared to violate platform guidelines and included false election claims, hate speech and extremism.
  • In some cases, platforms’ policies are themselves inconsistent – for example, Facebook’s guidelines exclude politicians from fact-checking while simultaneously stating that the platform will demote content that fact-checkers have repeatedly debunked.
  • Platforms vary significantly in how much information they provide when content is reported, ranging from individual notifications for each livestream to practically none at all.
  • More tools are needed to support researchers in analyzing and reporting livestreamed content that violates community guidelines.
     

ISD carried out a qualitative analysis of 26 separate livestreams on Facebook, YouTube, X and TikTok. Analysts used a variety of keywords related to the presidential election, US politics and campaign issues to manually search for, identify and review relevant livestream content on each platform.

The analysis found persistent community guideline violations related to election integrity and political misinformation (15 livestreams), hate speech (9 livestreams) and extremism (2 livestreams). ISD analysts reported all of these videos to the relevant platforms to test their enforcement processes. This section details the content identified, the policies violated and the results of those enforcement tests.
 

Table 1: Livestream content identified and categorized according to guideline violation.

Election Integrity  

The analysis found that 15 livestreams included likely instances of election and civic integrity policy violations. Eight videos featured prominent media and political figures, including former President Donald Trump, Russell Brand and Mike Lindell, making disproven claims about fraud and rigged votes in the 2020 election. The repetition of false claims about the last election was the most common form of content identified in this review.

In two videos (one from Facebook, one from YouTube), Trump asserted that the 2020 election was “rigged and stolen.” Lindell’s comments in two separate videos also propagated the myth of a compromised election system, branding the US the “cheating capital of the world.”

As noted in ISD’s Election Preparedness report, platforms’ policy guidelines on false claims regarding the validity of the 2020 election results varied significantly, with YouTube, X and Meta taking a less restrictive approach than TikTok and Snap. Yet allowing this content to proliferate may still conflict with their policies as written. Meta states: “we remove misinformation that is likely to directly contribute to a risk of interference with people’s ability to participate in those processes.” False claims that the last presidential election was “rigged” could reasonably be judged to lead voters to question the integrity of the upcoming election, potentially suppressing turnout.

Similarly, YouTube states that “false claims that could materially discourage voting” are prohibited, yet seemingly in contradiction of this policy, YouTube also announced in June 2023 that they would “stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”

Facebook excludes “posts and ads from politicians” from fact-checking, including “the words that a politician says” that are “labelled as created by, on behalf of or directly quoting the politician or their campaign.” This effectively exempts content quoting Trump from the platform’s misinformation policies.

Image 1: Trump campaign rally, broadcast on Right Side Broadcasting Network’s YouTube channel on September 18, 2024.

Facebook explains this policy by saying that “in mature democracies … political speech is the most scrutinized speech that there is.” Yet the channels that broadcast the livestreams on Facebook and YouTube – Newsmax and Right Side Broadcasting Network – are not media organizations with a history of scrutinizing Donald Trump. Both have a long track record of uncritically promoting and repeating many of Trump’s most egregious lies regarding election integrity.

Additionally, false claims that the last election was “rigged” have been the subject of countless fact checks since 2020, making them among the most fact-checked topics of the past four years. Despite this, neither Facebook nor YouTube includes any fact checks on the livestreams featuring Trump, or any information banners indicating where users can access more information about the upcoming election. It can safely be argued that there is no scrutiny of political speech here, only explicit promotion of false claims about election integrity in the US.

This policy stance appears to contradict Facebook’s statement that it will demote content shared by politicians if it has been debunked by fact-checkers, creating a lack of clarity as to whether Trump’s often-repeated and often-debunked claims of election rigging in 2020 would be considered policy violations.

ISD reported these videos under the “False information → politics” and “misinformation” criteria on Facebook and YouTube respectively. No action appears to have been taken against either the videos or the accounts. 

We found similar election denialist content from former Fox News host Tucker Carlson on a political discussion livestream on YouTube; in the video, he stated that people in the US “were so shocked to see their election stolen [in 2020], which obviously it was.” Carlson, like Trump and Lindell, has a well-publicized history of sharing false claims about voter fraud in the last US presidential election.

YouTube’s guidelines on election integrity prohibit “false claims that widespread fraud … occurred in certain past elections,” but this policy only applies to past elections in Germany and Brazil, according to YouTube’s webpage listing the policy. As noted above, YouTube policies regarding election integrity appear contradictory.

Tucker Carlson’s remarks clearly contain “false claims” about “past elections,” and ISD reported the livestream for “misinformation.” The report’s status is currently listed as “live,” which the platform says means either that it has not yet been reviewed or that reviewers decided it does not violate the platform’s community guidelines. It is not possible to determine which of the two applies.

Claims about the upcoming 2024 election were also prevalent across platforms. The host of one live Facebook video claimed that Amazon had been caught “rigging [the] 2024 election AGAINST Trump.” On a YouTube livestream, a host claimed that “Peter Thiel is correct” in saying that “if the election is close [Vice-President and Democratic nominee] Kamala [Harris] will win because [Democrats] will cheat.”

TikTok livestreams included wide-ranging discussions about the election and US politics. Among them were numerous comments and claims targeting Harris, asserting that the election would be rigged in her favor in some way. In one video, a speaker claimed Harris “is promoting families to not exist anymore”; a comment in a separate livestream said that the “[Democratic Party] are setting things up so that after the election, they’re going to use mechanisms to rob us of our sovereignty.”

Other remarks cited beliefs originating with the QAnon conspiracy theory. One person claimed that “white hats,” a term QAnon followers use to describe Trump supporters operating within the government, are in control of the election’s outcome. This comment was preceded by another saying that if Trump did not win the election, there might be “no other option than to overthrow the government.”

This content clearly violates TikTok’s guidelines on Civic and Election Integrity, which prohibit misinformation that may “disrupt the peaceful transfer of power or lead to off-platform violence.” ISD reported the account hosting this livestream on TikTok under the platform’s “Report Account → Posting inappropriate Content → Misinformation → Election misinformation” criteria. TikTok reviewed the account and decided there was “no violation.”

Hate Speech 

The analysis discovered instances of hate speech in 9 livestreams, the majority of which (6 livestreams) were found on X. Speakers in the platform’s Spaces audio livestreams frequently made derogatory comments about racial and ethnic groups, or directly attacked people with dehumanizing speech, contempt or disgust.

The bulk of the hate speech identified targeted Jewish people and communities. It surfaced in discussions about politics, race and immigration and included references to long-standing antisemitic conspiracy theories. In one livestream discussion, a speaker said “the Jews are the ones controlling the levers of people in the institutions.”

In another, a speaker said “the thing that sets the Jew apart from the other races is that they don’t have their own land so they’ve just been this parasitical class.” Other livestreams contained even more pointed attacks on Jewish people, with a speaker stating “trying to convince a Jew they’re evil is a fool’s errand … there’s no saving or fixing a Jew.”

These comments clearly violate X’s guidelines on Abuse/Harassment and Hateful conduct. ISD reported these livestreams under the platform’s “abusive behavior” criteria but has yet to receive any notification regarding these reports. No action appears to have been taken against either the livestreams or the accounts that broadcast this content.

Hateful speech was also discovered in TikTok livestream content, with targets including political figures such as Harris. In one TikTok livestream discussing the upcoming election, a speaker said “I’m just gonna do what Kamala did” before making slurping sounds. The implication that the vice-president used sexual acts to advance her career in law and politics is part of a long-running and misleading slur against her.

ISD reported this livestream under the platform’s “Hateful behavior → attacks/slurs” criteria. The video and account were removed by TikTok upon review, the only instance of platform action documented during this analysis.

Extremism

The analysis discovered instances of extremist language and calls for violence in two livestreams. In one audio discussion on X, a speaker praised the Nazi regime and discussed National Socialism in a manner that distorted historical facts and promoted extremist views. They said “national socialism was never about racism or racialism… it was about freedom … from Jewish banks,” before praising divisions of the Nazi Schutzstaffel (SS) paramilitary organization and adding, “we wanted peace for the entire human community” and were interested in “ending wars.”

The other instance occurred on TikTok, where, during a livestream discussion between users, speakers welcomed the prospect of violence were Trump to be injured or killed in another assassination attempt. One speaker said “we’ve now had two attempts on his life. If something happens to this man, they haven’t seen anything yet. It’s gonna make history if something happens to this man,” prompting a response from another user who said “I don’t know if we can use the [they spell out, letter by letter] … W A R word. C I V I L. It’s not going to be a good day.”

ISD reported this livestream under the platform’s “Violent extremism → Violent threat or incitement” criteria. At the time of writing, almost two weeks after the livestream was flagged, the report was still listed as under review. The account remains live on the platform.

Platform responses

Although platforms were largely uniform in failing to remove this content, ISD found significant differences in the amount of information they provided on the process. 

  • TikTok was largely responsive to user reports, sending individual notifications for each livestream/account reported.
  • YouTube’s report status page included information for each livestream. However, it was unclear whether content was still under review or had been deemed not to violate the platform’s guidelines.
  • Facebook has a similar report status page, though the reported livestreams did not appear on it. At the time of writing, ISD has received no information about the status of its reports: whether the livestreams were reviewed or whether any action was taken.
  • X had no page with any information about reports. ISD received only one notification, after reporting an account that had hosted a livestream; that report is currently under review. There were no notifications related to the livestreams we flagged.

Conclusion  

There are a variety of challenges to researching livestream content. Even in this small sample, the livestreams averaged over an hour in length. Sourcing, analyzing and archiving such content is time consuming, particularly for the qualitative analysis used in this review.

The ephemerality of the material can also prove a challenge: on TikTok, livestream content disappears once the broadcast ends, making it impossible to archive or report afterwards (the other platforms offer playback options). The length of livestreams likewise complicates analysis.

As livestream content becomes ever more important to political discussion, platforms should prioritize aiding discovery and access for researchers and platform users. Adding transcriptions, as YouTube does on its video content, is a helpful way to aid content analysis. Facebook, YouTube and TikTok all offer standalone portals where users can check the status of content they report.

As evidenced in this report, though, the level of detail disclosed to users is quite limited, and some of the terms used lack clarity. Addressing these issues would be a welcome step from the three platforms; it would similarly be helpful for X to create its own portal.

At present, as this analysis has illustrated, the enforcement of community guidelines and protection of users from election misinformation, hate speech and extremist content on livestreams is falling through the cracks.