Child Safety
Claims that AI represents a novel, uncontrollable existential threat to child safety – through widespread grooming, sextortion, deepfakes, and mass production of child sexual abuse material (CSAM) – have sparked sensational headlines and demands for sweeping AI bans. In reality, while targeted safeguards are essential for emerging misuse cases, the narrative of an AI-fueled “epidemic” of child exploitation is overblown.
Facts
Major AI Models Effectively Block CSAM Generation: Leading developers such as OpenAI, Google, and Meta build robust safety filters into their models that block the vast majority of attempts to generate prohibited content; misuse is largely confined to open-source or less-guarded models rather than frontier consumer tools.
Deepfake Incidents Are Isolated, Not Epidemic: High-profile cases of AI deepfakes used for school harassment or sextortion exist but are sporadic and localized, building on pre-AI bullying patterns rather than indicating a broad new wave of child victimization driven by generative technology.
No Evidence of AI Broadly Amplifying Contact Offending or Grooming: Studies and law enforcement data show that real-world child exploitation and grooming are still overwhelmingly carried out by human offenders over social media and messaging platforms, and AI has not yet been linked to significant increases in contact-based abuse.
Rapid Industry and Policy Responses Are Containing Risks: Collaborative efforts such as Thorn’s Safety by Design initiative, new laws criminalizing AI-generated CSAM in most U.S. states and many other countries, and proactive detection tools have quickly closed vulnerabilities and prevented widespread proliferation.
Resources
NCMEC – 2024/2025 CyberTipline Overview and Data. Annual reports highlighting total CyberTipline volume (millions of reports) versus the emerging but limited subset involving suspected AI-generated material.
Thorn – Safety by Design for Generative AI (2025 Progress). Details industry commitments and advancements in embedding safeguards to prevent CSAM generation in mainstream AI models.
Internet Watch Foundation (IWF) – AI CSAM Trends Report. Tracks confirmed AI-generated imagery, noting increases but emphasizing detectability and containment through specialist tools.
Stanford HAI Policy Brief – Addressing AI-Generated CSAM (2025). Analyzes student misuse cases while highlighting gaps and effective response frameworks beyond panic-driven policies.
Freethink – A Tragedy, a Lawsuit, and the Birth of an AI Moral Panic (2025). Examines how isolated AI-related incidents fuel exaggerated narratives, drawing parallels to historical tech panics (relevant to broader child safety claims).