Child Safety
AI child safety risks are real and demand safeguards. Thankfully, federal law already prohibits AI-generated CSAM, and leading AI companies already implement strong protections in their models.
Is AI CSAM illegal?
Yes. The FBI explicitly states that AI-generated child sexual abuse material (CSAM) is illegal.1
Long-standing federal criminal statutes apply fully to AI-generated content—no new regulations are required to make AI-generated CSAM illegal.
The federal government is already prosecuting criminals who use AI to generate illicit materials.2
What has the government done to address deepfakes?
The TAKE IT DOWN Act prohibits sexual exploitation via AI-generated deepfakes.3
Under this law, platforms are required to remove the content when requested.
AI is also helping the government better identify victims and catch offenders.4
How do AI companies prevent CSAM generation?
The top AI companies—OpenAI,5 Anthropic,6 Google,7 Meta,8 and xAI9—explicitly prohibit the generation of CSAM in their usage policies.
Organizations such as Thorn have developed frameworks and solutions to help AI companies minimize CSAM risks.10
Many leading AI companies have committed to specific “Safety by Design” principles to protect child safety.11
Additional Resources
Public service announcement on AI-generated CSAM from the FBI.
This announcement from the FBI reiterates that AI-generated CSAM is illegal under federal law.
“Artificial Intelligence and Combatting Online Child Sexual Exploitation and Abuse” from the Department of Homeland Security.
This flyer from the DHS highlights the illegality of AI-generated CSAM and the opportunities for law enforcement to use AI to help identify victims and catch offenders.
TAKE IT DOWN Act from the US Congress.
This law prohibits sexual exploitation via AI-generated deepfakes and requires platforms to take such content down.
“Safety by Design: One year of progress protecting children in the age of AI” from Thorn.
This article discusses the work by Thorn to help AI companies operate with “Safety by Design” principles that protect children.
OpenAI safety policy: Explicitly prohibits AI CSAM.
Meta safety policy: Explicitly prohibits AI CSAM.
Google safety policy: Explicitly prohibits AI CSAM.
Anthropic safety policy: Explicitly prohibits AI CSAM.
xAI safety policy: Explicitly prohibits AI CSAM.