TikTok series: Methodology
This report is part of a series looking at hate speech and extremist content on TikTok. The series includes a methodology overview, policy recommendations, an analysis of white supremacist content on the platform, and an analysis of anti-migrant and anti-refugee content.
Introduction
These two reports focus on white supremacy and anti-migrant and anti-refugee content on TikTok, and on how platform features are used to post, promote and disseminate it. This section covers the methodology used to conduct these investigations and produce these reports.
Glossary
Anti-migrant/Anti-refugee
Comparable to nativism, ISD defines this as a set of beliefs that oppose or are hostile towards migrants and refugees. This stance can be expressed in policies, rhetoric or actions aimed at rejecting, reducing or preventing migration in a jurisdiction or the dehumanization, harassment or intimidation of migrants or refugees. These beliefs are often used to express hatred in the form of xenophobia, racism, and bigotry, or endorse extremist forms of nationalism against those perceived to be outsiders and a threat to natives.
Hate
ISD understands hate to relate to beliefs or practices that attack, malign, delegitimize or exclude an entire class of people based on immutable characteristics, including their ethnicity, religion, gender, sexual orientation, or disability. Hate actors are understood to be individuals, groups or communities that actively and overtly engage in the above activity, as well as those who implicitly attack classes of people through, for example, the use of conspiracy theories and disinformation. Hateful activity is understood to be antithetical to pluralism and the universal application of human rights.
Extremism
ISD defines extremism as the advocacy of a system of belief that claims the superiority and dominance of one identity-based ‘in-group’ over all ‘out-groups.’ Extremists propagate a dehumanizing ‘othering’ mind-set and use any means necessary, including hate speech or acts of violence, to justify their radical or fanatical political, religious or cultural views.
Far right
ISD’s definition of far right is in line with that of right-wing extremism expert Cas Mudde, who defines the term as groups and individuals that support or endorse political or social belief systems featuring at least three of the following five features: nationalism, racism, xenophobia, anti-democracy and strong state advocacy [1]. Mudde’s definition of “far right” includes both radical right-wing and extreme right-wing actors. Mudde states that both radical and extreme right-wing actors believe that “inequalities between people are natural and positive,” but have differing attitudes towards democracy. Radical right-wing actors are not against democracy in principle, while extreme right-wing actors reject democracy as a form of government.
White supremacy
ISD defines white supremacy as the belief in the superiority of white people over non-white people, and that white people should be politically and socially dominant over non-white people. This can extend to a belief in the need for violence against, or even the genocide of, non-white people.
Methods
This research was conducted primarily qualitatively, as ISD does not have access to the TikTok API; data collection and analysis were therefore carried out manually.
For both reports, researchers generated a list of English language keywords associated with hateful terms that espoused and supported white supremacist and anti-immigration ideologies. Researchers searched TikTok for videos, accounts, hashtags or sounds featuring these keywords. Researchers used combinations of terms, changed the order of words in phrases, slogans or names and also used misspelled versions of the keywords to source content posted by users who knowingly tried to evade content moderation.
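The keyword-variation step described above can be sketched in code. This is an illustrative sketch only, not ISD's actual tooling (the report states the search was carried out manually); the substitution table is a hypothetical example of the kind of character swaps used to evade moderation, applied here to a neutral placeholder phrase.

```python
from itertools import permutations


def generate_variants(phrase: str) -> set[str]:
    """Generate search variants of a keyword phrase by reordering its
    words and applying simple character substitutions of the kind used
    to evade content moderation (hypothetical illustration)."""
    words = phrase.lower().split()
    # All word orderings of the phrase.
    variants = {" ".join(p) for p in permutations(words)}
    # Example leetspeak-style substitutions (illustrative, not ISD's list).
    substitutions = {"i": "1", "o": "0", "e": "3", "a": "4"}
    for v in list(variants):
        for src, dst in substitutions.items():
            variants.add(v.replace(src, dst))
    return variants
```

Each variant would then be used as a search query against videos, accounts, hashtags and sounds, as described above.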
In some instances, when ISD identified an account that supported a specific ideology, researchers then used a ‘snowball sampling’ method to identify new accounts based on their follower networks and similarities in account characteristics, such as topical interests clearly stated in account bios. As a result of this snowballing, ISD was able to unearth additional content to analyse.
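Snowball sampling of this kind can be modelled as a breadth-first traversal of follower networks. The sketch below is a hypothetical formalization, assuming `get_followers` and `is_relevant` callbacks that stand in for the manual review of follower lists and account bios the researchers actually performed.

```python
from collections import deque


def snowball_sample(seed_accounts, get_followers, is_relevant, max_accounts=200):
    """Breadth-first snowball sampling sketch: starting from seed
    accounts, traverse follower networks, keeping accounts that match
    relevance criteria (e.g. topical interests stated in bios).
    Both callbacks are hypothetical stand-ins for manual review."""
    seen = set(seed_accounts)
    queue = deque(seed_accounts)
    sample = []
    while queue and len(sample) < max_accounts:
        account = queue.popleft()
        if is_relevant(account):
            sample.append(account)
            # Only expand the networks of relevant accounts.
            for follower in get_followers(account):
                if follower not in seen:
                    seen.add(follower)
                    queue.append(follower)
    return sample
```

Expanding only from relevant accounts keeps the sample anchored to the ideology under study rather than drifting into unrelated corners of the follower graph.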
Data collection lasted roughly one month and, in total, researchers identified 212 videos for later analysis.
In evaluating whether content is supportive of hateful or extremist ideologies, research projects must bear in mind that definitions vary considerably across legal jurisdictions around the world. This presents a clear challenge for the categorization and classification of such terms, and for research based on them. ISD has developed its definitions (listed in the glossary above) through numerous projects and years of research.
In these reports, videos were selected for later analysis when they met these definitions. Some content fell outside ISD’s definitions of these terms: some was clearly educational or otherwise critical of such content, and some appeared intended to discuss or criticize an extremist individual or group. TikTok’s Community Guidelines make exceptions for such content, stating that it may be “in the public interest.”
Some of the content identified and initially selected for analysis did not end up in the final sample. After consultation and reviews among analysts in the form of validation exercises, some videos were judged to be false positives and fell outside of ISD’s definitions.
ISD categorized content as supportive of hateful or extremist ideologies if the TikTok video’s added on-screen text or accompanying caption met ISD’s definitions.
To analyze these videos in detail, ISD used a codebook first developed for the 2021 ISD Hatescape report, which guides researchers in examining each video and account and coding over 10 elements for each, such as post interaction metrics, whether the account featured expressions of hatred, and whether each video expressed hatred [2].
Coding Content
ISD set out to examine white supremacist and anti-migrant content on TikTok. As explained above, ISD used keywords and identified relevant accounts to source video posts and create the sample for each report.
ISD coded each video for content that matched one or more of these categories. ISD also coded videos for other details such as the use of music, hashtags, captions, video creation functions or filters (duet, stitch, green screen and other effects) as well as interaction metrics (likes, comments and shares).
ISD also examined the profile of accounts that posted each video included in the sample for the presence or nature of any references to hatred or extremism in the username, nickname, image, biography or featured URL.
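The coded elements described above can be thought of as a structured record per video. The sketch below is a hypothetical data model; the field names are illustrative and are not ISD's actual codebook fields.

```python
from dataclasses import dataclass, field


@dataclass
class VideoRecord:
    """Hypothetical coded record for one video, modelled loosely on the
    elements described in the text; field names are illustrative."""
    video_id: str
    categories: list = field(default_factory=list)       # e.g. white supremacist, anti-migrant
    features_used: list = field(default_factory=list)    # duet, stitch, green screen, filters
    hashtags: list = field(default_factory=list)
    likes: int = 0
    comments: int = 0
    shares: int = 0
    account_hate_references: bool = False                # in username, nickname, image, bio or URL
    guidelines_violated: list = field(default_factory=list)  # matching Community Guidelines
```

A record like this lets multiple analysts code the same fields consistently and makes validation exercises (comparing codes across analysts) straightforward.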
Violation of Community Guidelines
ISD assessed whether the content that was identified violated TikTok’s Community Guidelines and, if so, which guideline(s).
ISD identified seven TikTok Community Guidelines that relate to the promotion of hatred, extremism and mis/disinformation and included these as options in the codebook. The guidelines were:
- Violent Behaviors and Criminal Activities
This includes violent threats, incitement to violence or physical injury, or promotion of criminal activities that may harm people, animals, or property. It also includes shocking and graphic content.
- Hate Speech and Hateful Behaviors
This includes hateful behavior, hate speech, or promotion of hateful ideologies, such as racial supremacy, misogyny, anti-LGBTQIA+, and antisemitism, as well as content that attacks a person or group because of protected attributes. This also includes promoting any hateful ideologies or using hateful slurs.
- Misinformation
This includes false, misleading or inaccurate claims; medical misinformation; climate misinformation; conspiracy theories denying well-documented violent events; and content causing prejudice towards a group with a protected attribute.
- Violent and Hateful Organizations and Individuals
This includes the presence or promotion of violent and hateful organizations or individuals including violent extremists, violent political organizations, hateful organizations and criminal organizations.
- Sexual Exploitation and Gender-Based Violence
This includes sexual exploitation or gender-based violence, including degrading or vulgar statements about a person’s intimate body parts, including genitalia, buttocks, and breasts.
- Harassment and Bullying
This includes language or behavior that harasses, humiliates, threatens, or doxxes anyone. TikTok defines doxxing as activity that “involves publishing personal information about someone online with a malicious intent. We recognize intent can be subjective, so we use objective indicators to help us understand it, such as captions and hashtag.”
- Firearms and Dangerous Weapons
This guideline states it is prohibited to promote the trade of firearms or explosive weapons, or content showing or promoting them if they are not used in a safe or appropriate setting. Facilitating the trade of, or offering instructions on how to make, firearms or explosive weapons is also prohibited.
While coding each video, researchers assessed whether the video violated any of these guidelines. This assessment happened alongside the evaluation of whether content was supportive of hateful or extremist ideologies.
For example, if a video promoted a white nationalist utopia in Europe and did so by including promotional images of Nazi figureheads, it was marked as white supremacy by researchers and both “Hate Speech and Hateful Behaviors” and “Violent and Hateful Organizations and Individuals” were selected too.
End Notes
[1] https://www.wiley.com/en-us/The+Far+Right+Today-p-9781509536856
[2] https://www.isdglobal.org/wp-content/uploads/2021/08/HateScape_v5.pdf