TikTok series: Policy recommendations 

This report is part of a series examining hate speech and extremist content on TikTok. The series includes a methodology overview, policy recommendations, an analysis of white supremacist content on the platform, and an analysis of anti-migrant and anti-refugee content. 


Across the analyses conducted by ISD, various mechanisms to improve platform safety in the context of hate speech and extremism were identified. Many of the issues discovered through the analysis were content-agnostic and pointed to significant improvements that could be made to enforcement infrastructure, systems, and product policies. Others revealed specific gaps in knowledge around content issue areas, which severely hamper policy enforcement, or a general lack of clarity and detail in TikTok’s community guidelines.   

Content Detection and Policy Enforcement  

One of the fundamental issues ISD analysts identified, and one that underpins many of ISD’s recommendations, was TikTok’s inability to respond not only to the more nuanced manifestations of hate and extremism and the tactics and narratives employed by accounts generating violative content, but also to the most overt and clear policy violations.

There were certain areas where a lack of nuanced understanding by moderators (and/or within the policies themselves) played a role in moderation failures, for example references to acronyms associated with extremism such as “HH” for “Heil Hitler”. However, ISD also found many instances of systemic moderation failures for overtly violative content that, even more alarmingly, often included co-occurring policy violations. In ISD’s assessment of white supremacist content, for example, 30% of the total sample included two or more policy violations. 

These findings show the need for significantly enhanced trust and safety resourcing, a deepened contextual understanding of hateful and extremist narratives among moderators, and robust external collaboration with researchers and civil society organizations, as well as adjustments to product features that may be used to amplify this type of content and undermine TikTok’s own safety efforts.  

Internal Trust and Safety Resourcing 

While TikTok has not made any reported cuts to its Trust and Safety teams in the past two years (unlike other platforms such as Meta and X), ISD’s findings show that further resourcing is required for TikTok to enforce its policies effectively and consistently, and to respond adequately to the scale and complexity of hate speech and extremism.  

Platform policy and enforcement teams within Trust and Safety lay the foundation for content identification, classification, and moderation. Specifically, it is the responsibility of the product policy teams to consistently identify the vast number of new violative content themes, dog whistles, and coded language. Without those learnings, moderators, who are by no means issue experts, will not receive the necessary guidance to control this content at scale. The rapid adaptation of hate groups and actors to new cultural and political contexts means that TikTok’s product policies must be equally dynamic and informed. Further investment must be made in these teams to ensure they have the appropriate resources to track and identify the ever-increasing number of covert and overt threats.  

Moderator Training and Resources 

While there are clearly gaps in moderating more nuanced forms of hate and extremist content, the failure to effectively moderate clear and overt policy violations as well indicates internal failures. These likely stem from moderator policy guidance that lacks comprehensive, specific examples of the types of hate speech or violent extremist content that violate TikTok’s policies. We propose significant investment in updating moderator guidance and training to include a broad spectrum of the types of hate speech likely to target specific groups, as well as the extremist individuals, ideologies, and events that ISD found were glorified and promoted on the platform. 

While increasing the number of moderators would improve content moderation across all violative content, giving moderators a realistic time frame in which to make decisions would arguably be more impactful. This is particularly true for issues such as hate and extremism, where violative content often spreads in less overt forms. Equally important is ensuring moderators have the resources and specialized training to recognize and understand the more nuanced ways hate speech and extremist ideologies manifest online. For instance, moderators tasked with identifying anti-migrant content must be familiar with both overt slurs and more subtle forms of derogatory language that might not be immediately recognizable without a deeper understanding of migration issues and current events. ISD recommends utilizing teams of specialized moderators for these issue sets, allowing potential policy violations to be re-routed to dedicated issue-based moderation queues with the context to enforce against less overtly violative content. 

Automated Content Moderation 

While enhancing the contextual understanding of human content moderators is crucial, TikTok must also improve the sophistication of its automated moderation tools so they can detect a broader range of violative words and phrases, and the context in which they are used. This requires training AI systems on datasets that include examples of hate speech and extremism in various guises, both violative speech and counter speech, accompanied by human oversight to correct and refine AI judgments. For these models to remain effective, they must be re-trained at a regular cadence given the speed at which the hate and extremism landscape evolves.  
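To illustrate the kind of pipeline this recommendation implies, the sketch below shows a minimal text classifier trained on labelled examples that distinguish violative speech from counter speech, with uncertain cases routed to human review. It is not a description of TikTok's actual systems; the example texts, labels, threshold, and routing names are hypothetical placeholders.

```python
# Minimal, illustrative sketch of a retrainable triage classifier.
# Dataset, labels and threshold are hypothetical; a production system would use
# far larger models and datasets, plus structured human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = violative, 0 = non-violative (incl. counter speech)
train_texts = [
    "all <group> should be removed from this country",    # violative
    "this rhetoric about <group> is hateful and wrong",   # counter speech, not violative
]
train_labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(text: str, review_threshold: float = 0.6) -> str:
    """Route content: send likely violations to human review, otherwise allow."""
    p_violative = model.predict_proba([text])[0][1]
    return "queue_for_human_review" if p_violative >= review_threshold else "allow"

# Re-training cadence: periodically refit on freshly human-labelled data so the
# model keeps pace with new coded language and evolving narratives.
```

The key design point is the human-in-the-loop step: automated scores prioritize review rather than triggering removals outright, and the periodic refit reflects the regular re-training cadence recommended above.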

Combatting Account Recidivism 

Tackling account recidivism—where banned users return to a platform under new accounts—poses significant challenges, especially in the context of hate speech and extremism. The reports on anti-migrant speech and white supremacist content highlight the persistence of such ideologies, underscoring the need for effective strategies to prevent banned individuals from circumventing platform rules. A comprehensive approach to mitigating account recidivism involves technological solutions, user community engagement, and enhanced policy enforcement. 

Technological measures are the first line of defense in identifying and preventing recidivist accounts. TikTok can leverage machine learning algorithms to analyze behavioral patterns and digital footprints that recidivist users often leave behind. Implementing stricter verification processes during account creation can also help deter recidivism. Empowering and educating the TikTok user community to identify and report potential recidivist accounts is another layer of defense. Additionally, TikTok could establish a trusted reporter program, involving reputable researchers and civil society groups, to assist in identifying recidivist accounts perpetuating hateful and extremist ideologies.   
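As a rough illustration of the behavioral-pattern analysis mentioned above, the sketch below compares a new account's activity fingerprint to that of a previously banned account using simple set overlap. The feature choices, weights, and threshold are hypothetical assumptions for illustration only; a real system would combine many more signals and require human confirmation before any action.

```python
# Illustrative sketch of behavioural-similarity screening for recidivist accounts.
# Features and weights are hypothetical placeholders, not TikTok's actual signals.
from dataclasses import dataclass

@dataclass
class AccountFingerprint:
    posting_hours: set[int]        # hours of day the account typically posts
    hashtags_used: set[str]        # hashtags used in the account's first posts
    followed_accounts: set[str]    # accounts followed shortly after creation

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recidivism_score(new: AccountFingerprint, banned: AccountFingerprint) -> float:
    """Weighted similarity between a new account and a previously banned one."""
    return (0.2 * jaccard(new.posting_hours, banned.posting_hours)
            + 0.4 * jaccard(new.hashtags_used, banned.hashtags_used)
            + 0.4 * jaccard(new.followed_accounts, banned.followed_accounts))

# Accounts scoring above a tuned threshold would be routed to a human review
# queue, not banned automatically.
```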

Product Features and Functionality 

Overhaul Auto-Suggest Safety Strategy 

The most significant content safety product risk TikTok faces is its auto-suggest function. Currently, this feature actively undermines TikTok’s entire safety strategy by proactively suggesting workarounds for users to access violative content or to identify communities that perpetuate hateful or extremist narratives.  

While it is unclear how TikTok manages its auto-suggest feature internally, its current behavior seems completely disconnected from TikTok’s video content moderation strategies and safety mitigation measures. ISD recommends a significant internal audit and a corresponding plan to remediate the harms the auto-suggest feature has created in the TikTok content ecosystem.  

Enhance Hashtag and Search Blocking Functionality 

ISD analysts discovered that hashtags and search terms were among the most significant contributors to the accessibility of violative content for TikTok users. Although TikTok does implement measures to block certain violative hashtags and search terms, the current list is far from comprehensive and can be easily circumvented by simple misspellings. The use of hashtags, in particular, has a compounding effect that further exacerbates the risks associated with an inadequate safety strategy. When a user identifies a single hashtag employed to evade moderation, they are immediately exposed to a multitude of other evasive hashtags, granting them access to a wider network of accounts disseminating violative content. 

Given these findings, enhancing the blocking and moderation of hashtags and search terms will be crucial in combating the spread of hate and extremism on TikTok. The widespread use of hashtags to spread anti-migrant and white supremacist content shows the need for a stronger, better-funded approach to moderating how content is discovered on the platform. As a foundational and critical step, TikTok must regularly update its list of blocked terms, incorporating a significantly increased number of variations and misspellings, and apply these filters not just to search results but also to hashtag suggestions. 
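The sketch below illustrates one way a blocklist could catch simple misspellings and character substitutions rather than only exact matches, and be applied uniformly to searches and hashtag suggestions. The blocked terms, substitution map, and similarity threshold are placeholder assumptions; any real deployment would need expert-maintained term lists and careful evaluation of false positives.

```python
# Illustrative sketch of blocklist matching that also catches close variants.
# Blocked terms and thresholds are placeholders, not a real moderation list.
from difflib import SequenceMatcher

BLOCKED_TERMS = {"exampleblockedterm", "anotherblockedterm"}  # placeholder entries
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "$": "s"})

def normalize(term: str) -> str:
    """Lowercase, undo common character substitutions, and strip spaces."""
    return term.lower().translate(SUBSTITUTIONS).replace(" ", "")

def is_blocked(query: str, threshold: float = 0.85) -> bool:
    """Block exact matches and close misspellings of blocked terms."""
    q = normalize(query)
    return any(
        blocked in q or SequenceMatcher(None, q, blocked).ratio() >= threshold
        for blocked in BLOCKED_TERMS
    )

# The same check would apply to search queries, hashtag suggestions and
# auto-complete, so circumventing one surface is not rewarded in another.
```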

TikTok’s strategy should include implementing a dynamic system that identifies and assesses the context in which hashtags are used, and then prioritizes the moderation review of content tagged with hashtags known to be co-opted by hate groups. This would likely involve automated tooling trained to detect subtle shifts in the usage patterns of specific hashtags.  
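One simple signal such a dynamic system could track is how often a given hashtag co-occurs with hashtags already known to be violative, flagging tags whose co-occurrence rate jumps between time windows. The sketch below is a hypothetical illustration of that idea; the tag names and drift threshold are assumptions, and flagged hashtags would feed a prioritized human review queue rather than being blocked automatically.

```python
# Illustrative sketch of flagging hashtags whose usage drifts towards known
# violative communities. Tags and thresholds are hypothetical placeholders.
KNOWN_VIOLATIVE_TAGS = {"exampleviolativetag"}  # placeholder

def co_occurrence_rate(posts: list[set[str]], tag: str) -> float:
    """Share of posts using `tag` that also carry a known violative hashtag."""
    tagged = [p for p in posts if tag in p]
    if not tagged:
        return 0.0
    return sum(1 for p in tagged if p & KNOWN_VIOLATIVE_TAGS) / len(tagged)

def flag_drifting_tags(last_week: list[set[str]], this_week: list[set[str]],
                       candidate_tags: set[str], drift: float = 0.15) -> set[str]:
    """Flag tags whose co-occurrence with violative tags jumped week over week."""
    return {
        t for t in candidate_tags
        if co_occurrence_rate(this_week, t) - co_occurrence_rate(last_week, t) >= drift
    }
```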

Also at issue is the ever-evolving array of coded words, numbers, and phrases used by individuals and groups that espouse extremist ideologies. White supremacist content often hides behind historical or cultural hashtags, using them as dog whistles to attract like-minded individuals while evading moderation. For example, hashtags referencing specific events or figures known primarily within white supremacist circles (#14words, #pepe) require a nuanced understanding of extremist symbols and language to identify. ISD recommends TikTok work closely with external experts to maintain an up-to-date database of such keywords. 

Rigorously Enforce Platform Policies In Comment Sections 

Improved detection and removal of comments that include violent or hateful rhetoric is critical if TikTok is to safeguard users from harm and disincentivize the creation of hateful and extremist content. During the analysis, comments that blatantly violated TikTok’s community guidelines appeared to proliferate unchecked, including comments that advocated for the shooting, killing, or harming of refugees or migrants. Given these findings, ISD recommends significantly improving the capabilities of automated comment moderation tools and potentially imposing harsher penalties on accounts that violate the platform’s community guidelines within comments.  

Transparency and User Communications 

Implement More Clarity in Community Guidelines  

Expanding and clarifying community guidelines to specifically address different sub-categories of hate speech and extremism is a vital step that TikTok must take to foster a safe online environment. The nuanced and evolving nature of online hate speech, as demonstrated through the pervasive spread of anti-migrant narratives and white supremacist content, underscores the need for comprehensive guidelines that go beyond general, high-level definitions to tackle these complex issues. 

Analysis of anti-migrant content on TikTok showcases how videos can subtly incite xenophobia and hostility under the pretense of discussing immigration policy or national security. Videos that label migrants and refugees as “invaders” or spread unfounded claims about their intentions contribute to a climate of fear and hostility. Guidelines should be enhanced to specifically address how discussions on migration must not cross into dehumanization or vilification of individuals based on their migrant status. Providing detailed examples, such as the wrongful portrayal of migrants as inherently criminal or dangerous, would help users and moderators better identify violative content. 

Expanding Researcher Access  

While TikTok has made meaningful progress on data access since ISD’s prior “Hatescape” report (having now released its Research API), there is still significant room for improvement. Currently, TikTok’s Research API is available only to non-profit academic institutions, significantly hampering civil society’s ability to conduct research on the platform. Additionally, the access requirements for academic institutions still place significant restrictions on how a project receives approval from TikTok, allowing the company to potentially decline project applications based on the reputational risk they may pose. ISD recommends TikTok expand access to its Research API to civil society organizations, and expand the geographies in which researchers can be based beyond the current scope of Europe and the US. This increased access becomes more critical as TikTok limits the functionality of other tools used by researchers, such as its Creative Centre.