30 March 2023
New research from CASM Technology and ISD has found a major and sustained spike in antisemitic posts on Twitter since the company’s takeover by Elon Musk on October 27, 2022. Powered by the award-winning digital analysis technology Beam – and based on a powerful hate speech detection methodology combining over twenty leading machine-learning models – researchers found that the volume of English-language antisemitic tweets more than doubled in the period following Musk’s takeover.
In total, analysts detected 325,739 English-language antisemitic tweets in the nine months from June 2022 to February 2023. Comparing the three months before and after Musk’s acquisition, the weekly average number of antisemitic tweets increased by 106% (from 6,204 to 12,762).
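As a quick sanity check, the headline percentage follows directly from the weekly averages reported above (the figures are from this report; the calculation itself is ours, shown only to make the comparison explicit):

```python
# Weekly averages of antisemitic tweets, as reported, for the three
# months before and after the October 27, 2022 takeover.
before_weekly = 6_204
after_weekly = 12_762

# Percentage increase in the weekly average.
pct_increase = (after_weekly - before_weekly) / before_weekly * 100
print(f"Weekly average increase: {pct_increase:.0f}%")  # ~106%
```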
Whilst preliminary studies conducted immediately after the takeover pointed to spikes in specific hateful slurs, this research moves beyond keyword-based analysis to demonstrate the broader and longer-term impact that platforms deprioritising content moderation can have on the spread of online hate. Our approach draws on a suite of natural language processing classifiers trained to identify antisemitic content in line with the IHRA definition, allowing us to identify messages at scale which can plausibly be categorized as hate speech.
The key findings from the research include:
- The volume of antisemitic tweets more than doubled in the three-month period after Musk’s takeover, compared to the period before.
- The rate of creation of antisemitic accounts more than tripled in the period after Musk’s takeover.
- The proportion of antisemitic content removed by Twitter appears to have increased since the takeover, with 12% of antisemitic tweets subsequently unavailable for collection, compared to 6% before. However, this potential increase in the removal rate has not kept pace with the increase in overall antisemitic content, with the result that hate speech remains more accessible on the platform than before Musk’s acquisition.
- Despite Musk’s promises to ‘max-deboost and demonetize’ hateful content, engagement with antisemitic content on Twitter remained steady.
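The interplay between these findings can be illustrated with a rough back-of-the-envelope calculation. Assuming, purely for illustration, that the reported removal rates apply uniformly to the reported weekly averages (an assumption of ours, not a figure from the report), the amount of antisemitic content left accessible each week still roughly doubles:

```python
# Reported weekly averages and approximate removal rates,
# three months before vs. after the takeover.
before_weekly, before_removed = 6_204, 0.06
after_weekly, after_removed = 12_762, 0.12

# Content left accessible after removals (illustrative estimate).
before_accessible = before_weekly * (1 - before_removed)
after_accessible = after_weekly * (1 - after_removed)
print(f"Accessible before: {before_accessible:,.0f} tweets/week")
print(f"Accessible after:  {after_accessible:,.0f} tweets/week")
```

Even with the removal rate doubling, the estimated volume of accessible antisemitic content rises from roughly 5,800 to over 11,000 tweets per week under these assumptions.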
The Musk effect: instant uptick and long-term impact
Twitter’s tumultuous takeover saw dramatic changes in the platform’s approach to tackling online harms. Within days, fundamental changes were made to policies and enforcement, including the reinstatement of accounts previously permanently banned, the dissolution of Twitter’s independent Trust and Safety Council responsible for advising on decisions around tackling harmful activity on the platform, and the laying off of over half of Twitter’s staff, including many of those responsible for content moderation, online safety and conversational health.
The effect of these changes was reflected in the data analysis outlined in this report, which demonstrates a major increase in the number of antisemitic Tweets posted in the immediate aftermath of the takeover – a level which, crucially, has remained elevated in subsequent months.
Figure 1: Volume of potentially antisemitic tweets over time, June 2022 – February 2023
We also identified a surge in the creation of new accounts posting hate speech which correlated with Musk’s takeover. In total, 3,855 accounts which posted at least one antisemitic Tweet were created between October 27 and November 6. This represents more than triple the rate of potentially hateful account creation for the equivalent period prior to the takeover. Closer assessment of these accounts showed that many displayed characteristics of overt racism and ethnonationalism. This correlates with a rise in coordinated harassment and even pro-ISIS activity on the platform around Musk’s takeover, suggesting that harmful online communities felt empowered by Musk’s widely publicized criticisms of Twitter’s former management.
Despite Musk’s claims that “hate Tweets will be max deboosted & demonetized” – indicating that they will not be algorithmically recommended to users on their news feeds (deboosted) and will not be able to be displayed as adverts or able to generate revenue (demonetized) – and that “New Twitter policy is freedom of speech, but not freedom of reach”, the research showed no appreciable change in the average levels of engagement or interaction with antisemitic Tweets before and after the takeover. There is no clear evidence that ‘de-boosting’ had any impact, as the platform’s algorithmic architecture seemingly continues to prioritize engagement over quality content. However, Twitter’s lack of algorithmic transparency means it is not easy to test this hypothesis at scale, preventing Musk from being held accountable for his promises.
A new regulatory paradigm
Twitter’s policy on hateful conduct claims to prohibit the incitement of harm against people based on race, ethnicity or religious affiliation; the harassment of individuals with reference to the Holocaust; and the use of slurs and racist epithets. However, our research surfaced a broad spectrum of antisemitic content on Twitter, ranging from harmful conspiracy theories about Jewish control of finance, media and politics, to overt support for antisemitic comments made by public figures such as Kanye West, to the promotion of profoundly racist white supremacy.
Much of this falls in a grey area, where it doesn’t contravene legal thresholds of hate speech, but nonetheless likely violates platform terms of service. Twitter purports to take a variety of actions on violating material, including removing content, and down-ranking and de-amplifying Tweets, but there is little clarity around how such platform interventions are enforced.
Significantly, our research did find that since Musk’s takeover of the platform around 12% of the plausibly antisemitic messages we identified are now inaccessible on the platform, compared to roughly 6% pre-takeover. Whilst there are multiple possible reasons for a Tweet not being retrievable, one cause would be the platform’s own content moderation practices. Crucially, however, our research suggests that these moderation efforts are not keeping up with the increased volume of hateful content on the platform, and accordingly are having a limited impact on the increasingly hateful environment on Twitter under Musk – a finding affirmed by recent research from the ADL showing the low removal rate of antisemitic Tweets flagged to the platform.
Beyond a sustained increase in hate speech, and evidence suggesting that other counter-measures to deboost harmful content are having limited impact, Twitter’s commitment to transparency also appears to be moving in the opposite direction, with the platform revoking the free API access that makes a substantial amount of this research possible. This poses a significant risk of limiting third-party efforts to assess the scale of harmful content on the platform, or to evaluate the effectiveness of its moderation. Incoming regulations from the European Union (in particular the Digital Services Act) will mandate much greater transparency from social media platforms on the actions being undertaken to prevent the proliferation of harmful material online.
The rising threat of antisemitism
These findings come amidst wider concerns around the proliferation of online antisemitism, with weaponised hate manifesting in rising real-world violence targeting Jewish communities. In 2021 the ADL tracked the highest number of antisemitic incidents, including harassment, vandalism and assaults, in the US since it began recording in 1979. This is not just a US phenomenon: in the UK the Community Security Trust recorded a similar spike in this concerning activity, whilst Germany’s Interior Ministry likewise reported record highs in antisemitic crimes following the Covid-19 pandemic.
These offline hate incidents should be viewed in the context of surges in online hate, with digital platforms facilitating the radicalisation of individuals towards antisemitic worldviews and the mass proliferation of narratives which seek to hold Jews responsible for the world’s ills. If we are to limit the spread of antisemitism and other forms of hate it is essential that policy solutions are found to its proliferation online.
This includes emerging regulatory regimes such as the EU’s newly introduced Digital Services Act, which seeks to enshrine a systemic approach to platform governance, addressing the platforms’ business models and their underpinning algorithmic architectures which promote hate. Our research suggests that Twitter is failing in its duties under this regime, amid calls from regulators for an increased commitment to meaningful transparency, sophisticated detection and proportionate enforcement by the platform.
The full report can be found on the ISD website, here.
Note: Although there are several explanations for Tweets being inaccessible on the platform, in the body of the report we explain how this can provide a potential measure of Twitter’s takedown efforts.