UK Technology Companies and Child Safety Officials to Test AI's Capability to Generate Exploitation Images
Tech firms and child protection agencies will be granted permission to evaluate whether artificial intelligence systems can generate child exploitation images under new UK laws.
Substantial Rise in AI-Generated Illegal Content
The declaration coincided with revelations from a safety watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, the government will permit designated AI companies and child safety organizations to inspect AI models – the underlying systems for chatbots and image generators – and verify they have sufficient safeguards to stop them from creating depictions of child exploitation.
The measures are "ultimately about stopping abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under strict protocols, can now detect the risk in AI systems promptly."
Tackling Legal Challenges
The changes have been implemented because it is against the law to create and possess CSAM, meaning that AI developers and others could not create such content even as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.
The law is designed to avert that problem by enabling experts to stop the creation of such material at its source.
Legislative Structure
The government is introducing the changes as amendments to the criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Real-World Impact
Recently, the official visited the London headquarters of a children's helpline and heard a mock-up conversation with counsellors involving a report of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion over an explicit AI-generated image of themselves.
"When I learn about children facing extortion online, it is a source of extreme anger in me and justified anger amongst families," he stated.
Concerning Data
A prominent online safety organization stated that instances of AI-generated exploitation material – such as online pages that may contain numerous images – had more than doubled so far this year.
Instances of the most severe material – the gravest form of exploitation – rose from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are launched," stated the head of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few clicks, providing offenders the ability to make potentially limitless quantities of advanced, photorealistic exploitative content," she continued. "Content which additionally commodifies survivors' trauma, and makes children, particularly female children, more vulnerable both online and offline."
Support Session Information
Childline also released details of counselling interactions where AI has been mentioned. AI-related risks discussed in the sessions include:
- Using AI to rate body size, physique and looks
- Chatbots discouraging young people from telling trusted adults about abuse
- Being bullied online with AI-generated content
- Digital blackmail using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling sessions where AI, chatbots and related terms were mentioned, four times as many as in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.