Technology companies and child protection agencies will be granted permission to assess whether artificial intelligence systems can generate child exploitation material under recently introduced UK laws.
The announcement came as a protection monitoring body revealed that cases of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the government will allow designated AI companies and child protection organizations to examine AI models – the foundational technology behind conversational AI and image generators – to ensure they have adequate safeguards against creating images of child sexual abuse.
The changes are "fundamentally about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect the danger in AI models early."
The amendments have been implemented because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This law is designed to avert that problem by helping to halt the production of such material at source.
The amendments are being introduced by the authorities as modifications to the crime and policing bill, which is also establishing a ban on possessing, producing or distributing AI systems developed to create child sexual abuse material.
Recently, the minister toured the London base of Childline and listened to a mock-up call to counsellors involving an account of AI-based abuse. The interaction portrayed a teenager seeking help after facing extortion using an explicit deepfake of himself, created with AI.
"When I hear about children facing blackmail online, it causes intense frustration for me and justified concern among families," he stated.
A prominent online safety organization reported that cases of AI-generated exploitation material – such as online pages that may contain multiple images – had significantly increased so far this year.
Instances of category A material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.
The legislative amendment could "represent a vital step to ensure AI products are safe before they are released," stated the chief executive of the online safety foundation.
"Artificial intelligence systems mean that survivors can be targeted all over again with just a few clicks, giving offenders the capability to make potentially limitless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further exploits survivors' suffering, and makes young people, particularly female children, more vulnerable both online and offline."
Childline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.