UK Tech Firms and Child Protection Officials to Examine AI's Capability to Generate Abuse Images
Technology companies and child protection organizations will be granted authority to evaluate whether AI tools can generate child abuse material under new UK laws.
Significant Increase in AI-Generated Illegal Material
The announcement came as a safety watchdog revealed that cases of AI-generated CSAM have risen sharply in the past year, from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the changes, designated AI developers and child protection organisations will be permitted to inspect AI models – the underlying technology behind conversational AI and visual AI tools – and verify that they have sufficient safeguards to prevent them from creating images of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," declared the minister for AI and online safety, noting: "Specialists, under rigorous protocols, can now detect the danger in AI models early."
Tackling Regulatory Obstacles
The changes have been introduced because it is illegal to produce or possess CSAM, meaning that AI developers and other parties cannot create such images as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was published online before addressing it.
This law is designed to prevent that problem by making it possible to halt the production of such material at its source.
Legislative Framework
The government is introducing the amendments as modifications to the criminal justice legislation, which also implements a ban on possessing, producing or distributing AI systems designed to create exploitative content.
Real-World Consequences
This week, the minister toured the London base of Childline and heard a mock-up call to counsellors involving an account of AI-based exploitation. The interaction portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about children experiencing blackmail online, it is a source of intense anger for me and justified concern amongst families," he stated.
Alarming Statistics
A leading internet monitoring foundation reported that instances of AI-generated abuse material – such as online pages that may contain multiple images – had more than doubled so far this year.
Instances of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, making up 94% of illegal AI images in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are released," stated the head of the internet monitoring organization.
"AI tools have made it so survivors can be victimised repeatedly with just a few simple actions, giving offenders the ability to produce potentially limitless quantities of sophisticated, lifelike exploitative content," she added. "Material which further commodifies victims' suffering, and renders young people, especially female children, more vulnerable both online and offline."
Support Interaction Data
Childline also released data on support interactions where AI has been mentioned. AI-related risks discussed in the sessions include:
- Using AI to evaluate body size and appearance
- AI assistants discouraging young people from consulting safe guardians about harm
- Being bullied online with AI-generated content
- Digital extortion using AI-faked pictures
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.