Abstract
For content creators, whose careers rely on digital visibility, hate and harassment are an occupational hazard that affects creators' mental health and, in cases where online harassment spills offline, their physical safety. We interviewed 19 YouTube creators about their experiences with hate and harassment and the strategies they use to combat it, focusing on platform-provided tools, to understand their needs and identify areas for improvement. While participants did report offensive content, they did not find the platform's reporting feature useful and felt they could not rely on it for remediation or support. Instead, they primarily used platform-provided moderation tools, social media hygiene practices, and other creators' influence to manage the abuse they receive. Additionally, we found that harassment extended beyond overt abuse perpetrated by bad actors to include seemingly innocuous interactions from creators' own audiences. Creators thus had to factor both external threats and intracommunity dynamics into their threat model. The persistence of these issues across years of research suggests that, absent changes in incentives or policy reforms, platform improvements alone are unlikely to meet user safety needs. We discuss how external factors contribute to these challenges or constrain solutions.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright (c) 2026 Journal of Online Trust and Safety
