UK Technology Firms and Child Safety Officials to Examine AI's Ability to Create Abuse Images
Tech firms and child safety agencies will receive permission to evaluate whether artificial intelligence tools can produce child exploitation images under new British laws.
Significant Increase in AI-Generated Illegal Material
The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the authorities will permit approved AI companies and child safety organizations to inspect AI models – the underlying technology for chatbots and visual AI tools – and verify they have adequate safeguards to stop them from producing images of child exploitation.
"This is ultimately about preventing abuse before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the danger in AI systems promptly."
Addressing Regulatory Obstacles
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot create such content even as part of a testing regime. Until now, officials could not act until AI-generated CSAM had been uploaded online.
This legislation is aimed at preventing that problem by helping to stop the creation of those images at their origin.
Legal Structure
The amendments are being added to the criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models developed to generate child sexual abuse material.
Real-World Impact
This week, the minister toured the London base of Childline and listened to a simulated call to counsellors featuring an account of AI-based exploitation. The call portrayed an adolescent requesting help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about children experiencing extortion online, it fills me with extreme anger and gives parents justified concern," he stated.
Concerning Statistics
A prominent online safety foundation reported that cases of AI-generated exploitation content – counted as webpages, each of which may contain multiple files – had more than doubled so far this year.
Instances of category A content – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Female children were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of infants and toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched," stated the chief executive of the online safety foundation.
"AI tools have made it so that victims can be targeted all over again with just a few simple actions, giving offenders the capability to create potentially endless quantities of sophisticated, lifelike exploitative content," she continued. "Material which further commodifies survivors' trauma, and renders children, particularly female children, less safe on and off line."
Counseling Session Information
Childline also released details of counselling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to evaluate body size and appearance
- Chatbots dissuading young people from consulting trusted guardians about abuse
- Being bullied online with AI-generated content
- Online blackmail using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the equivalent period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.