Child Safety

Claims that AI represents a novel, uncontrollable existential threat to child safety (through widespread grooming, sextortion, deepfakes, and mass production of child sexual abuse material, or CSAM) have sparked sensational headlines and demands for sweeping AI bans. In reality, while targeted safeguards are essential for emerging misuse cases, the narrative of an AI-fueled "epidemic" of child exploitation is overblown.


Facts

  • Major AI Models Effectively Block CSAM Generation: Leading platforms such as OpenAI, Google, and Meta deploy robust safety filters that block the vast majority of attempts to generate prohibited content; misuse is largely confined to open-source or less-guarded models rather than frontier consumer tools.

  • Deepfake Incidents Are Isolated, Not Epidemic: High-profile cases of AI deepfakes used for school harassment or sextortion exist but are sporadic and localized, building on pre-AI bullying patterns rather than indicating a broad new wave of child victimization driven by generative technology.

  • No Evidence of AI Broadly Amplifying Contact Offending or Grooming: Studies and law enforcement data show that real-world child exploitation and grooming remain overwhelmingly perpetrated through conventional channels such as social media and messaging apps, with AI not yet linked to significant increases in contact-based abuse.

  • Rapid Industry and Policy Responses Are Containing Risks: Collaborative efforts like Thorn's Safety by Design initiative, new laws criminalizing AI-generated CSAM in most U.S. states and a growing number of countries, and proactive detection tools have quickly addressed vulnerabilities, preventing widespread proliferation.


Resources