[Product Case Study] How Can AI Shape Content Safety at TikTok?

Valerie Alexis Chang
5 min read · May 29, 2024


The digital world thrives on connection, but ensuring a safe online space for users to express themselves freely is an ongoing challenge.

As someone passionate about online safety, I understand the importance of creating a safe and positive space for all users, especially minors, on platforms like TikTok. This case study explores how TikTok can leverage AI to empower users while keeping them safe from harmful content.

What is TikTok?

It’s a social media platform where users create, share, and discover short videos (15 seconds to 3 minutes).

Our Mission:
At TikTok, user safety is key. For this case study, I assume we will prioritize:

  • Safe Content Consumption: Ensuring users encounter appropriate content.
  • Positive Online Environment: Fostering a respectful and supportive community.

Understanding Our Stakeholder Segments:

  • Content Creators: Users who share their creativity through videos.
  • Content Viewers: Users who consume and engage with video content.
  • Third-Party Providers: Governments, safety organizations, and law enforcement agencies that collaborate on online safety initiatives.

Prioritization of Stakeholder Segments

To prioritize, I will use the following factors:

  1. Impact: Who feels the most pain?
  2. Market Size: How many users face this problem?
  3. Urgency: How urgent is it to solve?

  1. Content Viewers (H, H, H)
  2. Content Creators (M, M, M)
  3. Third-Party Providers (M, L, M)

Selected Segment: Content Viewers

Sub-Segmentation of Content Viewers

  1. Minors (13–17 years old)
  2. Adults (18+ years old) who like general topics (travel, food, fitness)
  3. Sensitive Topics Enthusiasts (politics, mental health, social issues)

Prioritization of Sub-Segments (Impact, Market Size, Urgency)

  1. Minors (H, M-H, H)
  2. Adults (M, H, M)
  3. Sensitive Topics Enthusiasts (H, L, H)

Selected Sub-segment: Minors

Why?

  • Large and vulnerable: Minors (13–17 years old) make up a significant portion of our users (32%). Their young age and developing maturity make them more susceptible to the negative effects of harmful content.
  • High impact: Exposure to inappropriate content can have a lasting impact on young users.
  • Urgent need: Protecting minors from harm is our top priority.

Adults
While adults are our biggest user group, the impact and urgency are lower here because of their greater emotional maturity and ability to filter content for themselves.

Sensitive Topics Enthusiasts
While a smaller group, users interested in sensitive topics face both high impact and urgency due to the potential dangers of the content they engage with.

Pain Points of Minors (from the Parents’ Perspective)

1. Exposure to Inappropriate Content:
😥 Worry that my child may encounter harmful content (e.g., alcohol, child exploitation material) without realizing the negative impact it will have on them.

2. Peer Pressure:
😥 Worry my child might be peer-pressured into following risky trends (e.g., trends that glamorize suicide).

3. Online Scams:
😥 Fear that my child may be privately interacting with scammers online and could be deceived.

4. Cyberbullying:
😥 What if my child gets bullied online without me knowing?

Prioritizing These Pain Points (Impact, Market Size, Urgency)

  1. Exposure to Inappropriate Content: (M-H, H, M-H)
  2. Peer Pressure into Risky Behavior: (H, L, H)
  3. Online Scams: (H, M, M)
  4. Cyberbullying: (H, L, M)

Selected Pain Point: Exposure to Inappropriate Content

Why?

  • Affects the largest demographic (minors), which is crucial for maintaining TikTok’s reputation as a safe platform. Content severity ranges from mild to severe, making it urgent to protect young users.
  • Peer pressure into risky behavior is also highly urgent due to its potential for immediate physical harm, but affects a smaller group of minors.
  • Online scams, while impactful due to financial and emotional distress, target a medium-sized market and are less urgent as they are not life-threatening.
  • Cyberbullying has high emotional and psychological consequences but is less urgent as it does not pose an immediate risk to life.

Brainstorming Solutions: Exposure to Inappropriate Content

AI Solutions for a Safe Platform:

  1. Contextual AI Analysis:
  • Goes beyond object recognition: instead of just detecting bottles, it reads the whole scene.
  • Analyzes the bigger picture (e.g., teenagers at a party) to infer likely underage drinking.
  • Flags potentially inappropriate content, even if cleverly disguised (see the sketch below).
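
To make this concrete, here is a minimal Python sketch of the idea. It assumes a hypothetical upstream vision model that emits detected objects, rough age estimates, and a scene label; the names, weights, and thresholds here are illustrative assumptions, not a real TikTok or vision-library API.

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    """Hypothetical signals an upstream vision model might emit for a frame."""
    detected_objects: list[str]  # e.g., ["bottle", "person"]
    estimated_ages: list[int]    # rough age estimates for detected people
    scene_label: str             # e.g., "party", "kitchen", "classroom"

def contextual_risk_score(signals: FrameSignals) -> float:
    """Combine object, scene, and age context into a single risk score.

    A bottle alone is ambiguous; a party scene full of minors is what
    turns it into a likely underage-drinking flag.
    """
    score = 0.0
    if "bottle" in signals.detected_objects:
        score += 0.2  # weak signal on its own
    if signals.scene_label == "party":
        score += 0.3  # scene context raises suspicion
    if any(age < 18 for age in signals.estimated_ages):
        score += 0.4  # minors present: the strongest contextual signal
    return min(score, 1.0)

frame = FrameSignals(["bottle", "person"], [15, 16], "party")
print(contextual_risk_score(frame))  # 0.9 -> route to review
```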

2. Early Detection of Risky Behavior:

  • Monitors user behavior for unusual patterns.
  • Example: A 14-year-old typically watches funny videos but suddenly engages with risky challenges.
  • Identifies deviations from normal behavior and alerts the safety team (a simple sketch follows).
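
Here is a minimal sketch of how such deviation detection could work, assuming each video carries a category label and we can compare a user’s recent engagement against their own baseline. The category names and the 0.3 jump threshold are assumptions for illustration only.

```python
from collections import Counter

# Illustrative category labels, not an actual TikTok taxonomy.
RISKY_CATEGORIES = {"dangerous_challenge", "self_harm", "substance_use"}

def risky_share(watch_history: list[str]) -> float:
    """Fraction of watched videos that fall into risky categories."""
    counts = Counter(watch_history)
    total = sum(counts.values())
    risky = sum(counts[c] for c in RISKY_CATEGORIES)
    return risky / total if total else 0.0

def flag_behavior_shift(baseline: list[str], recent: list[str],
                        jump_threshold: float = 0.3) -> bool:
    """Alert if risky engagement jumps sharply versus the user's own baseline."""
    return risky_share(recent) - risky_share(baseline) > jump_threshold

# A 14-year-old who usually watches comedy suddenly binges risky challenges.
baseline = ["comedy"] * 50 + ["dangerous_challenge"] * 2
recent = ["dangerous_challenge"] * 6 + ["comedy"] * 4
print(flag_behavior_shift(baseline, recent))  # True -> notify the safety team
```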

3. AI Guardian
Personalizes safety measures based on age and behavior:

  • Age-Based Customization: Restricts access to inappropriate content for younger users. A 13-year-old can’t see videos with mild violence, but a 17-year-old can.
  • Content-Type Adaptation: Ensures educational videos are accurate and suggests related content. Adds warnings and resources for sensitive topics like mental health issues.
  • Educational Interventions: If a minor watches dangerous challenges, the AI shows warnings about the risks and links to safety resources (see the sketch below).
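
As an illustration, here is a hedged sketch of the age-based customization piece. The rating labels and age thresholds are assumptions chosen to match the example above, not actual TikTok policy.

```python
# Illustrative age-gating rules; thresholds are assumptions, not TikTok policy.
CONTENT_MIN_AGE = {
    "general": 13,
    "mild_violence": 16,
    "mature_themes": 18,
}

def can_view(user_age: int, content_rating: str) -> bool:
    """Gate content by viewer age; unknown ratings default to 18+."""
    return user_age >= CONTENT_MIN_AGE.get(content_rating, 18)

def render_decision(user_age: int, content_rating: str) -> str:
    """Either show the video or block it and surface safety resources."""
    if can_view(user_age, content_rating):
        return "show"
    return "block_and_show_safety_resources"  # educational intervention

print(render_decision(13, "mild_violence"))  # block_and_show_safety_resources
print(render_decision(17, "mild_violence"))  # show
```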

Prioritizing Solutions

To prioritize, I will use the following factors:

  • Impact: Who feels the most pain?
  • Effort: How expensive is it to build?

  1. Contextual AI Analysis (H, M)
  2. Early Detection of Risky Behavior (M, M)
  3. AI Guardian (H, H)

Selected Solution: Contextual AI Analysis

Why?

  1. Contextual AI Analysis (High Impact, Medium Effort):
    This solution aligns with TikTok’s mission of fostering creativity. By analyzing context beyond objects (e.g., party scene with teenagers and bottles), it can effectively identify potentially inappropriate content, even if disguised cleverly. This broad reach makes it a high-impact solution.
  2. Early Detection of Risky Behavior (Medium Impact, Medium Effort): This AI monitors user behavior for unusual patterns. For example, a sudden shift from funny skits to risky challenges might indicate vulnerability to peer pressure. Early detection allows for intervention before escalation. I see this as an additional preventive layer that complements the Contextual AI Analysis solution.
  3. AI Guardian (High Impact, High Effort): This long-term vision personalizes safety measures based on age and behavior. It offers age-appropriate content access, ensures educational content accuracy, and provides resources for sensitive topics. While impactful, it requires significant development effort. This remains a long-term vision due to its complexity.

Risks of this solution: Contextual AI Analysis

  1. False Positives: The AI might incorrectly flag benign content as inappropriate, leading to user frustration.

👉 To mitigate this, we can implement a human review process for flagged content and continuously refine the AI algorithm based on user feedback.

2. Over-Moderation: Excessive filtering could stifle creativity and frustrate users.
👉 To mitigate this, we can define a framework that sets confidence thresholds for content flagging, with a human review band in the middle (see the sketch below).
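
Here is a minimal sketch of both mitigations together, assuming two confidence thresholds (the 0.40 and 0.90 values are illustrative): low-risk content is published, very-high-risk content is removed automatically, and the ambiguous middle band is routed to human reviewers, whose decisions can also feed back into refining the model.

```python
# Assumed thresholds splitting AI decisions into three bands.
ALLOW_BELOW = 0.40   # low confidence of harm -> publish normally
REMOVE_ABOVE = 0.90  # very high confidence of harm -> remove automatically

def route_flagged_content(risk_score: float) -> str:
    """Route a model's risk score; the middle band goes to human review,
    cutting false-positive frustration without letting clear violations through."""
    if risk_score < ALLOW_BELOW:
        return "allow"
    if risk_score > REMOVE_ABOVE:
        return "remove"
    return "human_review"

for score in (0.15, 0.65, 0.95):
    print(score, "->", route_flagged_content(score))
# 0.15 -> allow, 0.65 -> human_review, 0.95 -> remove
```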

[BONUS] AI Safety Capabilities Infographics
