Content glorifying Hitler surges online amid growing historical revisionism
27 September 2024
By: ISD Analysts, with research and insights from George Duarte, Deputy Director for Research & AI Strategy at Drive Agency
Overview
On 2 September, right-wing pundit Tucker Carlson hosted historian and podcaster Darryl Cooper for an interview. During their conversation, Cooper made debunked and revisionist statements about the Holocaust and World War II. While the comments provoked significant online controversy, they also, more alarmingly, spurred a substantial amount of content glorifying Hitler and promoting him as a “peacemaker.” This content reached tens of millions on social media, exacerbating and amplifying an already burgeoning trend of pro-Hitler content emerging online. This trend may in part be due to the rise in general antisemitism since the 7 October Hamas attacks and the ensuing conflict.
While major social media platforms have long struggled to control content espousing neo-Nazi ideologies, content that is specifically pro-Hitler or offers a revisionist history of the Holocaust appears to be a distinct and growing trend. In its analysis, ISD observed this content and associated narratives across a range of mainstream and fringe platforms, often receiving millions of views and prompting platform users to question established facts about the Holocaust and Hitler’s role in it.
Key Findings
- In assessing the rise of pro-Hitler, neo-Nazi content, ISD analysts found that content glorifying Hitler, questioning facts about Hitler’s intent, or including English audio versions of his speeches, received over 50 million views across X, YouTube, TikTok and Instagram throughout 2024.
- Across the platforms studied, this content was more easily accessible and had significantly higher reach on X than TikTok, Instagram or YouTube.
- The content’s audio predominantly featured Hitler’s speeches translated into English via AI. On X, the visuals were primarily of Hitler; on TikTok the videos often included imagery of young, western influencers, landscape images and religious symbols.
- On TikTok and X, when users asked the post author for the full-length audio within the comments section, they were sometimes directed to videos on YouTube, which the post authors claimed were the sources of the audio. Just seven AI-generated videos of Hitler speeches on YouTube, posted in 2024, had received 6.9 million views by the end of data collection.
- Since 13 August, posts that appeared to glorify or support Hitler, or included Nazi iconography, received over 24.8 million views across X, TikTok and Instagram.
- The content seemed to be more readily available and attained higher reach on X than on the other platforms assessed, with just 11 posts garnering 11.2 million views in a one-week period. ISD analysts observed that after engaging with even a small amount of pro-Hitler content, the X algorithm quickly adjusted to proactively serve this type of content. In one test, 10 of the first 19 posts (52%) served in an account’s For You feed featured content including Hitler, praise of Nazis or overt antisemitism (such as claiming anyone using an Israeli flag image or emoji is a pedophile).
Tucker Carlson’s Interview With Darryl Cooper Amplifies Revisionist History
The Darryl Cooper interview, aired on Carlson’s show on X (formerly Twitter) on 2 September, featured Cooper making several debunked claims, including the assertion that the Holocaust was not an intentional act of genocide but rather an unintended consequence of logistical failures by Nazi Germany. Cooper also suggested that the millions who died in concentration camps “ended up dead” because the Nazis were unprepared to manage large numbers of prisoners, framing the mass extermination as a response to food shortages.
In his interview, Cooper also painted Winston Churchill as the primary antagonist in World War II. He argued that Churchill’s refusal to engage with peace proposals from Nazi Germany escalated the conflict, suggesting that British bombings during the war amounted to large-scale terrorist attacks. These statements, which downplay Nazi atrocities and shift blame onto Allied leaders like Churchill, have been widely criticized for promoting historical revisionism and attempting to present Hitler as merely a leader seeking peace.
Highly followed public figures such as Elon Musk, the owner of X, initially promoted the interview, though Musk later deleted his post calling the interview “worth watching” amid mounting criticism. Despite the backlash, Carlson’s interview with Cooper gained significant traction, highlighting how the controversy itself may have amplified its reach.
Revisionist History and the “Deep State Education System”
In a pattern reminiscent of the recent recirculation of Osama bin Laden’s propaganda justifying the 11 September attacks, platform users began questioning the known facts of WWII and leaning into conspiratorial narratives about how American school curricula portray Adolf Hitler and his role in the Holocaust.
On X, white nationalist Nick Fuentes, known as the leader of the “Groypers,” stated: “people are still afraid to question the big narrative regarding Hitler because they know they will be punished with a vengeance by the Jews.” The post received 1.4 million views.
Another X user, whose post garnered 73,000 views, stated: “Fascinating to watch so many people who already know they’ve been lied to about RECENT events suddenly realize they’ve also been lied to about PAST events (history). Did you really think the history they taught you in government-run schools was going to be any less fake than the fake news they are hammering you with in the PRESENT? Almost everything you’ve been taught about history is a lie.”
This did not start with Carlson’s interview with Cooper. Earlier this year, well before the interview, several prominent figures raised narratives questioning history’s view of Adolf Hitler. In July 2024, Candace Owens, a fellow right-wing commentator with a history of making comments critics describe as antisemitic, stated, when discussing Hitler and WWII, that “the allies ethnically cleansed 12 million Germans” and “he (Hitler) is the focus of all our youth indoctrination … they’ve turned him into Lord Voldemort.” In addition, a 2015 clip of Israeli Prime Minister Benjamin Netanyahu began recirculating. In the clip, Netanyahu says Hitler did not intend to kill Jews and was encouraged to do so by the then Mufti of Jerusalem, Haj Amin al-Husseini, who, according to Netanyahu, suggested to Hitler that instead of expelling the Jews he “Burn them.” In August alone, two posts featuring these clips received 5.2 million views.
Narratives Aligned with Pro-Kremlin Propaganda
Many of the narratives about Hitler spread online also aligned with statements made by another Carlson podcast guest, Russian President Vladimir Putin. In Carlson’s interview with Putin in February of this year, Putin stated: “before the Second World War, when Poland collaborated with Germany, it rejected Hitler’s demands but nevertheless participated with Hitler in the division of Czechoslovakia, but since it did not give up the Danzig corridor, the Poles nevertheless forced him – they got carried away and forced Hitler – to start the Second World War against them first. Why did the war start on September 1, 1939, precisely against Poland? Because it turned out to be uncooperative. Hitler had no choice but to implement his plans, starting specifically with Poland.” This statement is refuted by historical evidence.
Pro-Hitler Content and Accounts Gain Significant Reach On X
Analysts were able to easily detect content on X that was likely violative of its policies, with just 11 posts receiving more than 11.2 million views over a one-week period. This content was more readily available than on other platforms via basic searches.
Content identified by ISD since August included: a video of Hitler with children that referred to Nazi Germany as a “magical place”; a video compilation of “Sieg Heil” salutes to Hitler in which the post’s author asks platform users to “Comment ‘sieg heil’ below”; an image of Hitler clutching his heart with the caption “Hitler seeing more and more people see he was right”; and an image of Hitler below the text “the answer to 2024 is 1934.”
Another account, which uses Adolf Hitler’s name in its username and the term “votehitler2024” in its bio, posted that it had been reported by multiple X users yet not suspended.
On a post that referenced the “Jewish problem,” X users added a community note that stated: “‘The Jewish problem’ is a euphemism for the racist, genocidal Nazi theory that Jews are a corrupt and inferior race that is to blame on all of Germany’s (and humanity’s) problems. Their solution to this problem was first ethnic cleansing, and then the genocide of all Jews.”
The content surfaced not only in searches for clearly hateful keywords such as “Heil Hitler,” but was also proactively served in X’s “For You” feed.
Much of this content appears to violate X’s policies, specifically the hateful imagery prohibition within X’s Hateful Conduct policy which states: “We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, gender identity or ethnicity/national origin. Some examples of hateful imagery include but are not limited to: symbols historically associated with hate groups, e.g., the Nazi swastika.”
This content existed prior to the Cooper interview, gained significant traction in the days immediately following it, and persists. On X, three posts from the weekend of 20 September that included images of Hitler alongside Nazi iconography amassed 676,000 views. One post, which featured an image of Hitler and a swastika, read: “There’s a reason why they didn’t translate Hitler’s speeches. They didn’t want you to see that he was right.”
AI-Translated or Edited Hitler Speeches Play an Outsize Role
Across X, TikTok, YouTube and Instagram, ISD analysts identified hundreds of videos featuring a wide range of Hitler’s speeches, translated into English using artificial intelligence. These videos, in some cases posted as early as January 2024, have proliferated throughout the year, particularly since September, garnering millions of views.
On TikTok, most of the identified videos included 20- to 30-second audio clips of Hitler’s speeches in English, set to background music. The videos often paired antisemitic, anti-Muslim, anti-immigrant and white supremacist messages with vague visuals that could circumvent content moderation efforts, including landscape images, religious symbols or war footage. One of the videos featured a silhouette of Hitler with the message: “Growing up is realizing who the villain really was” (reminiscent of reactions to the Bin Laden “letter to America”). This video garnered 548K views before being removed. Analysts also found several versions of another video featuring a Hitler speech, paired with the message “worst combo ever,” followed by symbols representative of Europe and Islam. Although TikTok has now removed all the identified videos from the platform, many received a significant number of views prior to removal.
Similar short audio clips were also available on Instagram, with one Reel (Instagram’s short-form video format) garnering 229K likes and 4.9M views. Analysts identified over 140 Reels featuring AI-translated Hitler speeches, some posted as early as April. Notably, even posts that explicitly stated in the caption that they featured Hitler’s speeches remained available without any content warnings. One of the identified Reels was recycled from a TikTok video that has since been removed, underscoring the cross-platform dissemination of this content.
On X, posts featuring AI-generated Hitler speeches were easily searchable and accessible, often including real footage of the speeches. One post from February 2024, featuring real footage and AI-translated audio of Hitler’s 1935 speech at the Krupp factory, garnered 7.9M views and remains available on the platform. Some of the comments on this post praise Hitler as a leader and ask for more translations to be made available in order to “correct” history.
Analysts identified several instances of X and TikTok accounts directing users to YouTube channels where longer-form versions of these speeches were available. Although some of the YouTube videos included disclaimers, they were often shared across networks of Hitler-glorifying accounts. One of the identified videos, featuring a 1922 Hitler speech, garnered 1.2M views and over 16K comments. Despite the video including an educational disclaimer, over half of the comments seemed to praise the speech. Some users claimed that Hitler “was right” and was “actually a good guy.” Others promoted conspiratorial claims about why the English translation was not widely available, with one user commenting “this goes to show that censorship is the primary weapon of those who wish to take on anyone as powerful as he was.” Analysts also found one tutorial video demonstrating how to use AI-powered voice cloning software to recreate Hitler’s voice in English.
Nazi and Pro-Hitler Content is Easily Accessible
An assessment of the accessibility of pro-Hitler content across mainstream platforms reveals a clear divergence in how the platforms handle search results and discoverability. For key search terms likely to lead to violative content, such as “Hitler,” “Sieg heil,” “heil Hitler” and “my fuhrer,” TikTok most consistently limited discoverability. On TikTok, three of the four search terms were not only blocked but also redirected to external educational content on the Holocaust. For the term that was not blocked – “Hitler” – a banner still appeared reminding platform users to “consult trusted sources to prevent the spread of hate and misinformation.” TikTok also provided a link to external educational sources for the blocked terms (see figure 3). Instagram, while not outright blocking any of the terms, provided content warning labels for two of them, which state “these keywords may be associated with dangerous groups and individuals” (see figure 4).
X and YouTube did not block searches, provide media literacy warnings or redirect to external resources. Where the two platforms diverged was in the type of content served when searching for these terms, with most of the YouTube content being either news coverage or content denouncing Hitler.
X Algorithms Quickly Adjust
After engaging with multiple pro-Nazi posts (via expanding or bookmarking the post rather than liking, commenting or reposting it), ISD analysts observed the algorithm quickly adjust to serve more of the same. For example, in the For You feed of one account used for monitoring (which did not follow any accounts identified as posting Hitler-related content), 10 of the first 19 posts served (52%) featured Hitler, praised Nazis or were clearly antisemitic (such as claiming anyone using an Israeli flag image or emoji is a pedophile).
The rapidity of the algorithmic change, particularly when content included hate iconography that should be easily identifiable by machine moderation (such as swastikas), is a stark example of how algorithms can almost immediately create a filter bubble of harmful content that violates a platform’s own community guidelines when those policies are not consistently enforced.
It Remains Unclear Why Platform Policies Are Not Being Consistently Enforced
Based on the content identified by ISD across platforms, and the reach of that content, there is clearly a disconnect between the platforms’ stated policies and how they are enforced. X’s policies, for example, call for removing accounts that glorify perpetrators of mass-casualty attacks, as well as content that contains hateful symbols or manifestos. Similarly, both TikTok and YouTube prohibit posts that promote harmful conspiracy theories or violent and hateful ideologies. Notably, YouTube uses the promotion of Nazism as an example of content that is explicitly prohibited on the platform. Yet content that appears to glorify Hitler or includes speeches that speak to his ideology and motivations is still readily available to users.
This persistence of pro-Hitler content may be indicative of detection failures within moderation protocols, unclear definitions within policies (such as the definition of a manifesto), or the misapplication of exceptions social media platforms put in place for content deemed to be ‘in the public interest.’ All platforms assessed include policy carve-outs for educational or newsworthy content, yet these definitions are often vague and subjective, with a lack of transparency as to when they have been applied.
Conclusion
The rise of pro-Hitler content across social media platforms is not an isolated phenomenon but part of a broader, alarming pattern in which revisionist historical narratives gain prominence online, particularly content aimed at or consumed by younger audiences. In these instances, platforms often struggle to enforce their own policies, as the boundaries between extremist propaganda and “educational” content become blurred. This growing trend of pro-Hitler content once again underscores the urgent need for clearer platform policies, consistent policy enforcement and enhanced media literacy education, particularly as AI-generated content continues to proliferate online.