UK Tech Companies and Child Safety Officials to Test AI's Capability to Generate Abuse Images

Tech firms and child safety organizations will receive permission to assess whether AI tools can generate child exploitation images under recently introduced British legislation.

Significant Increase in AI-Generated Illegal Material

The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will permit designated AI developers and child protection organizations to inspect AI models – the foundational systems for conversational AI and image generators – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

The measure is "ultimately about stopping abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the risk in AI models promptly."

Addressing Regulatory Obstacles

The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties could not generate such images as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was published online before acting on it.

This legislation is designed to prevent that problem by making it possible to stop the production of such images at source.

Legal Structure

The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a ban on possessing, producing or distributing AI models developed to create exploitative content.

Real-World Impact

Recently, the official toured the London headquarters of a children's helpline and listened to a simulated call to advisers featuring a report of AI-based abuse. The scenario depicted an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I learn about children experiencing blackmail online, it causes extreme frustration in me and rightful concern amongst parents," he said.

Concerning Data

A prominent online safety organization reported that instances of AI-generated abuse content – webpages that can each contain numerous images – have significantly increased so far this year.

Instances of the most severe material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a crucial step to ensure AI products are secure before they are released," commented the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a few simple actions, giving criminals the ability to create potentially limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which further compounds survivors' suffering, and makes young people, especially female children, less safe online and offline."

Support Interaction Data

The children's helpline also published details of support interactions where AI has been mentioned. AI-related harms mentioned in the conversations include:

  • Using AI to rate body size, physique and looks
  • AI chatbots discouraging young people from talking to trusted adults about harm
  • Facing harassment online with AI-generated content
  • Digital blackmail using AI-faked images

Between April and September this year, Childline delivered 367 support sessions where AI, conversational AI and related terms were discussed, four times as many as in the equivalent timeframe last year.

Fifty percent of the mentions of AI in the 2025 interactions were connected with mental health and wellness, including using chatbots for support and AI therapeutic apps.

Nicholas Richardson