British Tech Firms and Child Safety Agencies to Test AI's Capability to Generate Abuse Images

Tech firms and child protection organizations will receive authority to evaluate whether AI systems can produce child exploitation images under new UK legislation.

Significant Increase in AI-Generated Illegal Content

The declaration coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow approved AI companies and child protection organizations to examine AI models – the underlying technology for chatbots and image generators – and ensure they have sufficient protective measures to prevent them from producing images of child sexual abuse.

"This is fundamentally about stopping abuse before it happens," stated Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now detect the danger in AI systems early."

Tackling Legal Obstacles

The amendments were needed because it is illegal to create and possess CSAM, meaning that AI developers and other parties could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This law is designed to avert that issue by enabling experts to halt the creation of those images at their origin.

Legal Framework

The amendments are being introduced by the government as modifications to the crime and policing bill, which is also implementing a prohibition on owning, creating or distributing AI systems developed to create exploitative content.

Real-World Impact

Recently, the official visited the London base of a children's helpline and heard a simulated call to advisors featuring an account of AI-based abuse. The interaction depicted an adolescent requesting help after being blackmailed using an explicit deepfake of themselves, created with AI.

"When I learn about children facing blackmail online, it stirs intense anger in me and rightful concern among families," he stated.

Concerning Data

A leading internet monitoring organization stated that cases of AI-generated exploitation material – such as webpages that may include numerous files – had more than doubled so far this year.

Cases of the most severe content – the gravest category of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a crucial step to guarantee AI products are safe before they are launched," stated the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, giving offenders the ability to make possibly endless amounts of advanced, photorealistic child sexual abuse material," she continued. "Material which additionally exploits victims' suffering, and makes children, especially female children, less safe both online and offline."

Counseling Interaction Information

The children's helpline also published details of counselling interactions where AI has been referenced. AI-related harms discussed in the conversations include:

  • Using AI to evaluate weight, physique and appearance
  • AI assistants dissuading young people from consulting trusted adults about harm
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling interactions in which AI, chatbots and associated terms were mentioned, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Adam Harper

A tech enthusiast and software developer with a passion for AI and emerging technologies, sharing practical insights and reviews.