Recommending Hate: How TikTok’s Search Engine Algorithms Reproduce Societal Bias

Published: 7 February 2025
Authors: Paula-Charlotte Matlach, Allison Castillo, Charlotte Drath & Eva F Hevesi

ISD analysed TikTok’s search engine to examine its moderation processes across English, French, German and Hungarian. Our research found significant evidence of algorithmic bias: in all four languages, search results consistently surfaced harmful associations that objectify and degrade presumed members of marginalised groups. These findings suggest that, in an effort to drive user engagement and increase revenue, TikTok’s search and recommender algorithms reproduce and potentially amplify societal biases. The analysis concludes with proposals for both lawmakers and the company to improve safeguards and mitigate the risk of these algorithms amplifying and perpetuating harm.

This briefing is part of a series examining online gender-based violence (OGBV) on TikTok in English, German, French and Hungarian. It is part of the project Monitoring Online Gender Based Violence Around the European Parliament Election 2024, funded by the German Federal Foreign Office.