

We aim to better understand the online transmission of overt and covert racist content within viral social media trends.

Specifically, this project will examine individual users’ participation on the highly popular short-video platform TikTok, including the active and deliberate creation, sharing, and redistribution of videos based on popular video or audio phenomena.

Through the use of cutting-edge social listening software, we will examine audiovisual media artifacts, viral participation, and algorithmic recommendations on TikTok to answer the following research questions:

How can we identify and classify audiovisual racist elements in viral TikTok videos?

What are users’ motivations to participate in viral creation and redistribution of such content?

How can we educate about and prevent (un-)intentional creation and distribution of racist TikTok content?


Daniel Klug

Carnegie Mellon University

Software & Societal Systems Dep. (S3D)

Ming-Te Wang

University of Pittsburgh

Learning Research & Development Center

Christina L. Scanlon

University of Pittsburgh

Learning Research & Development Center

Alice Huguet

RAND Corporation

Karen Sowon

Carnegie Mellon University

Brady Shore

PennWest University


Extremist messages of hate are unfortunately present in everyday digital interaction on social media platforms. While hate groups have indeed found ways to actively spread extremist political and cultural ideologies online, much of the hateful content on social media is disguised. Instead of overt messaging, extremist views catch users’ attention through more subtle, seemingly innocuous elements.

At first glance, these materials do not appear to be intentionally pushing users toward extremes, but online content carries multiple textual, audio, and video cues that may tap into and feed unconscious biases.

With recent whistleblower reports identifying the potency and intent of social media algorithms, the concern is that viral trends involving race, culture, or political ideologies may serve as an entry point into a meandering rabbit hole that leads toward increasingly extremist views.

Considering these growing concerns, it is critical that we determine ways to help social media users identify and decode hateful elements within trending content. This is particularly difficult in the case of audiovisual content on social media, as hateful elements may appear on only one level of a video, such as the visual, textual, or auditory layer.

As a result, younger social media users are largely unaware that, by participating in questionable trends, they are spreading racist sentiments. Adolescents in particular are avid consumers and creators of social media content. In fact, social media could be considered a critical developmental context due to its influence on youth’s perception of reality and daily interpersonal relationships.

Because their cognitive skills related to perception, awareness, and empathy are still developing, teens either do not understand the connection between racially insensitive content and extremism, or they frequently dismiss it as an inevitable part of viral participation.

As such, educational interventions are needed to create awareness, mitigate harm, and prevent radicalization, especially among young digital natives.


The main goal of this interdisciplinary project is to better understand the user-related mechanisms of why and how hateful content is able to linger as a latent part of viral social media content.

We expect our results to answer key questions about the spread of racist visual, textual, auditory, and, above all, audiovisual messages in viral TikTok trends from the perspective of user behavior and motivation rather than through data-driven models.

Our review of academic literature, combined with a content analysis of TikTok phenomena (AIM 1), will provide a unique and transferable list of attributes that indicate racist content in social media videos and hence offer an analytical framework for researching other online platforms.

Our quantitative and qualitative data exploration (AIMS 2 & 3) will

(a) generate data sets of TikTok video trends that provide more detailed insights into racist viral content and

(b) inform strategies aimed at reducing the spread of racist content on social media.

We will widely disseminate these results to youth-oriented contexts (e.g., schools, after-school programming, community centers) so as to raise awareness of how racial biases can operate in in-person and online settings.

Results from these scientific inquiries will culminate in a pilot curriculum (AIM 4) that helps youth become more responsible members of their online social community by bolstering their ability to recognize, reduce, and report racist social media content.

AIM 1: Define Types and Forms of Racism in Digital Audiovisual Media Artifacts

AIM 2: Quantitative Trend Analysis of Racism in Viral TikTok Phenomena

AIM 3: Qualitative User Studies of Sharing Behavior on TikTok in the Context of Algorithmic Video Recommendation

AIM 4: Design and Pilot Educational Interventions

© MINT Lab, 2022 | Stills for the header image were taken from public TikTok channels.