Implementing Robust Child Safety Measures in AI Technology: An Industry-wide Initiative

OpenAI, in collaboration with industry giants like Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, is undertaking a major initiative to prioritize child safety during the creation, release, and ongoing management of generative AI technologies. This campaign centers on the Safety by Design principles and comes as a response to the potential risk generative AI could pose to children.

This forward-thinking project is spearheaded by Thorn, a nonprofit organization dedicated to protecting children from sexual exploitation and abuse, and All Tech Is Human, an organization devoted to dealing with the multifaceted issues presented at the intersection of technology and society. OpenAI and its partners in this initiative are resolute in ensuring that child safety is not an afterthought, but a core value at every stage in the evolution of AI technologies.

OpenAI has already made significant strides in adopting a comprehensive Safety by Design mindset. Measures taken include establishing age restrictions for ChatGPT, working proactively to minimize harms from generated content, enhancing reporting mechanisms, and maintaining open dialogue with key stakeholders in child protection, such as the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other industry and government entities.

The Safety by Design approach involves three key commitments:

  1. Develop: Create and refine generative AI models with a focus on preemptively addressing child safety concerns. Responsibilities include responsibly sourcing training datasets, actively removing and reporting any child sexual abuse or exploitation material (CSAM/CSEM) found in the data, and incorporating continuous feedback and stress-testing processes. This also covers developing solutions that counter intentional misuse.
  2. Deploy: Generative AI models are released and distributed only after thorough child safety checks and evaluations, with protective measures integrated throughout the process. Developers are encouraged to take ownership of safety provisions.
  3. Maintain: Continue ongoing work to understand and respond to child safety risks. This includes a commitment to removing AI-generated CSAM produced by malicious actors and to investing in research and future technological solutions.

This public pledge marks an essential step toward curbing the potential misuse of AI, especially in creating or disseminating child sexual abuse material. The collaborating entities have also committed to releasing annual progress updates.

The tools developed by OpenAI, such as ChatGPT and DALL-E, have been built with considerable care and thought, prioritizing safety and ethical use. OpenAI remains committed to working with Thorn, All Tech Is Human, and the broader tech community to uphold the Safety by Design principles, with the aim of continually mitigating potential harms to children.

This collective action underlines a unified approach to child safety and reflects a shared commitment to ethical innovation and to safeguarding the well-being of society's most vulnerable members.

Disclaimer: The above article was written with the assistance of AI. The original sources can be found on OpenAI.