Thursday, October 23, 2025

How Tech Companies Are Safeguarding Children in the Era of Generative AI

The Rapid Evolution of AI

AI technology is advancing at an unprecedented pace, growing and evolving far more quickly than the regulations intended to govern its use. This rapid evolution poses critical challenges, especially regarding ethical frameworks and safety measures. With this landscape shifting, tech companies find themselves at a crossroads: they must take proactive steps to protect their users—especially children—who are increasingly vulnerable to exploitation in this digital age.

Alarming Statistics on Child Exploitation

One of the most glaring issues emerging from the rise of generative AI is its impact on children. Dr. Rebecca Portnoff, head of data science at Thorn, highlighted several shocking statistics at a recent panel during the AI Conference in San Francisco:

  • Explosive Growth of Child Sexual Abuse Material (CSAM): In the past two decades, there has been a staggering 13,400% increase in CSAM.
  • Vulnerability of Minors: Approximately 40% of children have reported being approached by strangers soliciting nude images.
  • High Frequency of Sexual Extortion: The National Center for Missing and Exploited Children (NCMEC) handles an average of 812 reports of sexual extortion every week.

These figures paint a distressing picture of a landscape where children are at greater risk than ever before.

The New Role of Law Enforcement

In today’s digital world, law enforcement faces the herculean task of navigating vast amounts of data to identify and assist exploited children. The advent of AI has only complicated this effort, as offenders can now use sophisticated models to create CSAM with far less oversight. Moreover, these criminals can leverage this material to blackmail or bully young victims.

Dr. Portnoff pointed out that 11% of recent sexual extortion reports to NCMEC involved threats based on fabricated sexual imagery. This alarming trend underscores an urgent need for responsive and robust policies and tools to fortify child safety.

Thorn’s Digital Safety Net

In the face of these challenges, Thorn is stepping up with initiatives designed to create a "digital safety net" for children. Their approach, termed “Safety By Design,” is a framework to ensure that child safety is prioritized throughout the AI development process. Many notable platforms, including Slack, Patreon, and Vimeo, have already adopted these guidelines.

The Safety By Design framework rests on three core principles:

  1. Develop: Proactively build and train generative AI models that directly address existing and potential child safety risks.
  2. Deploy: Ensure that any generative AI models are released only after thorough safety evaluations, maintaining protections throughout the deployment phase.
  3. Maintain: Continuously monitor and respond to evolving child safety risks to ensure that both models and platforms remain secure.

This framework aims not just to react to threats but to preemptively safeguard against them.

The Need for Tech Companies to Act

As offenders become increasingly sophisticated—using AI to generate CSAM and even modifying existing platforms for nefarious purposes—tech companies must step up to the plate. This responsibility includes implementing robust safeguards such as removing harmful tools from search results and adhering to government and law enforcement guidelines during development.

Dr. Portnoff emphasized, “Tech really moves fast. If your tech stack outpaces your safety interventions, your efforts are not going to be that effective.” This statement captures the crux of the dilemma facing tech developers today: they must integrate safety measures into their development cycles, ensuring that harmful content generation is kept at bay.

Engaging Parents in the Fight

While the panel at the AI Conference focused primarily on developers, Thorn’s guidelines offer valuable insights for parents as well. Thorn provides practical tips and conversation starters designed to help parents engage with their children about digital safety. These resources can empower families to navigate the complexities of the digital world more safely.

For more information on how parents can protect their children, visit Thorn's dedicated resource page.

The Ongoing Conversation at the AI Conference

The dialogue at the AI Conference makes clear that the urgency of protecting children in the age of AI cannot be overstated. As the event continues, it fosters an essential conversation about how the tech community can collaborate to safeguard the most vulnerable among us. The need for responsible innovation and proactive measures has never been clearer, especially given the potential impacts of AI technologies on future generations.

Stay informed about the ongoing discussions and developments by following the conference coverage and engaging with organizations committed to making a difference in child safety.
