Child Safety Analysis


Fears about AI and child safety, centered on grooming, sextortion, and deepfakes, drive headlines and calls for moratoriums. The risks are real and warrant safeguards, but the "epidemic" narrative is exaggerated.

Facts

  • Major AI Models Effectively Block CSAM Generation: Leading platforms such as OpenAI, Google, and Meta deploy safety filters that block the vast majority of attempts to generate prohibited content, and their detection and prevention systems continue to improve.
  • Deepfake Incidents Are Isolated, Not an Epidemic: High-profile cases of AI deepfakes used for school harassment or sextortion exist, but they are sporadic and localized, extending pre-AI bullying patterns rather than constituting harm at epidemic scale.
  • No Evidence That AI Broadly Amplifies Contact Offending or Grooming: Studies and law enforcement data show that real-world child exploitation and grooming remain overwhelmingly perpetrated through traditional means, not through new AI-enabled methods.
  • Rapid Industry and Policy Responses Are Containing Risk: Collaborative efforts such as Thorn's Safety by Design initiative, new laws criminalizing AI-generated CSAM in most U.S. states and in a growing number of countries, and proactive detection tools are keeping emerging risks in check.
