By: ISD Global, Krystalle Pinilla
In a time when antisemites are able to network freely online, hiding behind coded language and phrases with double meanings, research has shown that hate speech against Jewish people has not only increased in volume over the past decade, but also in severity.
This Dispatch provides an overview of the online antisemitism threat landscape, which has increased in recent years, particularly since the pandemic. It is based on our recently released publication ‘Online Antisemitism: A Toolkit for Civil Society’, which aims to build capacity among civil society to confront online antisemitism, and was jointly authored by ISD and B’nai B’rith International, in partnership with UNESCO.
The COVID-19 pandemic was the first global crisis with real-time access to the internet. As we came together on Zoom calls and shared videos honouring healthcare workers, the “Infodemic” – as labelled by the World Health Organisation – simultaneously gave rise to the spread of conspiracies, misinformation and disinformation.
Many COVID-19 related conspiracies built on age-old antisemitic narratives, portraying Jews as financial beneficiaries of the pandemic, wanting to dominate the world, or looking to poison the global population through vaccines or virus manufacturing. These myths spread online at an unprecedented rate: during the year after the introduction of lockdown measures in spring 2020, ISD found a 7-fold and 13-fold increase in antisemitic comments across French and German channels respectively. However, this trend wasn’t limited to French- or German-language content; Jews were being depicted in a variety of political contexts and languages as the creators, spreaders and beneficiaries of the pandemic.
The digital space has long been a vehicle for antisemitism—especially in the European context. A survey in 2018 by the Fundamental Rights Agency of the European Union found that among European Jewish respondents, 89% considered online antisemitism to be a problem in their country. Another study conducted in 2019 estimated that more than 10% of all tweets about Jews and Israel were antisemitic.
Online antisemitism is a cross-platform issue with real-world implications, affecting both mainstream social media and “alt-tech” platforms such as Telegram, Bitchute, 4chan, Gab and Parler. The real-world repercussions of this online threat are evident in both the short and long term, as physical security around synagogues and other Jewish spaces comes into question following major attacks in recent years. Some attacks bring to light the overlapping nature of this threat, as seen with the Halle synagogue shooting in 2019. Stephan Balliet targeted a synagogue on Yom Kippur, the most sacred day in Judaism, and then went to a kebab shop looking to attack Muslims. Thirty-five minutes of the attack were livestreamed on Twitch, shared on right-wing extremist Telegram channels and seen by about 2,200 people before being removed. Balliet wrote in his documents that he first planned to attack a mosque or an Antifa centre, but then opted for Jewish people “because he considered Jews to be the root of all evil”. The 28-year-old man showed clear signs of the intersectional dangers of antisemitic, anti-Muslim and anti-immigration hostilities, which are common to the right-wing extremist online subculture and reflected in the ‘Great Replacement’ conspiracy.
With the rise of antisemitic content online, social media platforms have fallen short in enforcing their own policies; another study reported that major platforms (Facebook, Instagram, TikTok, Twitter, YouTube) failed to remove 84% of posts containing antisemitic hate that had been flagged to them. Facebook was found to be the worst performing, failing to respond to 89% of reported content despite having enacted policy changes saying they would do so.
Antisemitic content online is expressed both overtly and covertly. More often than not, it is expressed implicitly through coded language or imagery that requires significant background knowledge to understand. This allows the user posting the content to deny knowledge of its true meaning (known as “plausible deniability”). Antisemitism online can be:
- Signalled via linguistic or numerical codes (e.g. 6MWE = six million weren’t enough; 88 = Heil Hitler);
- Expressed through implicit statements (such as claims that billionaire George Soros is a “globalist puppet master” aiming to replace “native” Europeans with non-white immigrants, without explicitly mentioning his Jewish identity);
- Conveyed through secret symbols (such as the triple parentheses identifying individuals or organisations as real or imagined Jews, e.g. by claiming that “(((ISD))) is an anti-white organisation”).
Types of Antisemitism
Antisemitism has come in many forms throughout history. The International Holocaust Remembrance Alliance (IHRA) defines antisemitism as “a certain perception of Jews, which may be expressed as hatred towards Jews. Rhetorical and physical manifestations of antisemitism are directed towards Jewish or non-Jewish individuals and/or their property, towards Jewish community institutions and religious facilities.”
This widely accepted definition, adopted by 37 countries and endorsed by the EU Parliament, Commission and Council, also includes a list of 11 examples of contemporary antisemitism. These include, “calls for violence against Jews, ‘classical’ antisemitic tropes (e.g. myths about a global Jewish conspiracy or blood libel), Holocaust denial and Israel-related antisemitism”.
Antisemitism can be a common factor in a broad range of extremist movements, with antisemitic narratives retaining similarities across the ideological spectrum (e.g. extreme left- and right-wing movements and violent extremist groups).
- The far right is often responsible for the most visible antisemitic threats online. Antisemitism is a key element of these movements, which draw on the entire spectrum of antisemitic content, from calls for violence and classical stereotypes to conspiracies about Jewish supremacy and Holocaust denial and distortion. Far-right engagement has also been supercharged by COVID-19, which drove the proliferation of antisemitic conspiracy theories around the pandemic.
- Antisemitic content is also prominent among violent extremist groups such as ISIS, and younger Salafi-jihadi extremist communities, who often combine antisemitic ideologies with different elements of gaming, youth and online subcultures—borrowing from antisemitic tropes and references from the far right. 
- Left-wing antisemitism often manifests itself through conspiracy myths alleging the “Jews” or “Zionists” are in control of the media, economy, government and other institutions for malicious purposes.
- QAnon conspiracists, who claim that a network of liberal elites is trafficking children to sexually abuse them and harvest “rejuvenation chemicals” from their bodies, often repackage antisemitic imagery and tropes related to the blood libel myth.
Antisemitic attitudes also exist beyond overt extremists and at a wider scale. A number of high-profile incidents have proved it to be a broader social phenomenon. Antisemitic narratives have been adapted to fit the contemporary context online by drawing on long-standing ideological tropes about the world supposedly being run by Jewish “elites”. This has manifested in COVID-19 related accusations of a “Jewish plot”, involving billionaire George Soros or the Jewish Rothschild family. Soros has been accused by a range of conspiracists of masterminding the European refugee crisis in 2015, and of funding groups like Antifa or the Black Lives Matter movement after the murder of George Floyd in 2020. The Rothschild family name, on the other hand, has historically been invoked in multiple conspiracies, and with the rise of memetic culture it is used as a coded way of directing hate towards Jews in general. In the context of COVID-19, factcheckers debunked allegations against Rothschild family members, including one about them having patented testing kits in 2015 and 2017.
Holocaust denial and distortion is easily found across a range of social media platforms. According to an upcoming UNESCO report, 17% of content related to the Holocaust on TikTok either denied or distorted the Holocaust. In part, this is due to Holocaust denial remaining within freedom of speech laws in certain countries, in addition to most cases not being criminally prosecuted in countries where legislation prohibiting it does exist.
On Facebook, Holocaust denial did not count as a violation of community guidelines until 2020, when reports emerged showing the platform’s algorithms were actively recommending Holocaust denial to users. Facebook and TikTok have since partnered with UNESCO and the World Jewish Congress to redirect users to verified and accurate information about the Holocaust on the website AboutHolocaust.org. However, as previously mentioned, a 2021 study found Facebook to be one of the worst offenders, failing to act on 89% of reported antisemitic content.
One related trend during the COVID-19 pandemic has been the use of language and symbols that equate the treatment of Jews under Nazi rule with that of opponents to lockdown measures, vaccination programmes and other public health mandates intended to curb the virus. Protestors in various geographies including Australia, Germany and the US have taken to wearing Yellow Stars to identify themselves as unvaccinated, implying state authorities are persecuting them for refusing to wear a mask, socially distance or even disclose their vaccine status to employers. While it is not always clear whether this is a conscious provocation or caused by historical ignorance, these inaccurate analogies distort the history of the Holocaust.
As interconnectedness and globalisation deepen, antisemitic content will likely continue spreading in new ways, for example through podcasts, which may become a key vector for Holocaust denial and far-right antisemitic conspiracy theories.
Unless platforms raise the bar on restricted language that pushes the boundaries of “free speech” and better enforce their own rules and community guidelines, online users will continue to be exposed to antisemitic content. Educating social media users on how to identify this content may be one of the better tools we have against antisemitism.
Recognising the enormous capacity for positive action the digital space offers, the Toolkit created by ISD and B’nai B’rith International, in partnership with UNESCO, aims to consolidate knowledge, providing a wide range of policy and community avenues for moving forward. It looks to build literacy among Jewish professionals, community members and allies from across society, better addressing antisemitic content from a policy perspective and at a community level.
1 – ISD defines Salafi-jihadism as the implementation of puritanical interpretations of Islamic governance achieved specifically through a violent interpretation of jihad.