British Tech Companies and Child Safety Officials to Examine AI's Ability to Create Exploitation Images
Technology companies and child protection agencies will be granted permission to evaluate whether artificial intelligence tools can produce child exploitation images under recently introduced British laws.
Substantial Rise in AI-Generated Harmful Content
The announcement came as a safety watchdog published findings showing that cases of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the government will permit designated AI companies and child safety organizations to inspect AI systems – the foundational technology for conversational AI and image generators – and ensure they have sufficient protective measures to prevent them from creating images of child exploitation.
"This is ultimately about preventing abuse before it occurs," said Kanishka Narayan, adding: "Experts, under strict conditions, can now detect the danger in AI models early."
Tackling Regulatory Obstacles
The changes were necessary because producing and possessing CSAM is illegal, meaning AI developers and other parties could not generate such images as part of an evaluation regime. Previously, officials could act only after AI-generated CSAM had been published online.
This legislation aims to avert that issue by making it possible to stop the production of those images at source.
Legal Framework
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on owning, producing or sharing AI systems designed to generate child sexual abuse material.
Real-World Impact
Recently, the minister visited the London base of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about young people experiencing blackmail online, it is a source of intense frustration for me and of justified concern amongst families," he stated.
Alarming Statistics
A prominent online safety organisation stated that cases of AI-generated exploitation material – each case potentially an online page containing numerous images – had more than doubled so far this year.
Instances of category A material – the gravest form of abuse – increased from 2,621 visual files to 3,086.
- Girls were predominantly victimized, making up 94% of prohibited AI depictions in 2025
- Depictions of infants to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a vital step to ensure AI products are safe before they are released," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few simple actions, giving offenders the ability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further commodifies survivors' trauma, and renders children, especially girls, less safe both on and offline."
Support Interaction Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:
- Employing AI to evaluate body size, physique and looks
- AI assistants discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using chatbots for support and AI therapeutic applications.