British Tech Firms and Child Safety Agencies to Test AI's Capability to Create Abuse Content

Technology companies and child protection organizations will be granted permission to assess whether artificial intelligence tools can produce child abuse material under new British laws.

Substantial Rise in AI-Generated Illegal Content

The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the authorities will permit approved AI companies and child protection organizations to inspect AI models – the foundational systems behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from producing depictions of child exploitation.

The measure is "fundamentally about stopping exploitation before it happens," the minister for AI and online safety declared, adding: "Experts, under strict conditions, can now detect the danger in AI systems promptly."

Tackling Legal Challenges

The changes were needed because it is against the law to produce or possess CSAM, meaning that AI developers and other parties could not legally generate such images as part of a testing process. Previously, the authorities had to wait until AI-generated CSAM had been published online before addressing it. The new law is designed to prevent that problem by helping to stop the production of such material at its source.

Legal Structure

The government is introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, producing or distributing AI systems designed to create child sexual abuse material.

Practical Impact

The minister recently toured the London headquarters of a children's helpline, where he listened to a mock-up call to advisors involving an account of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I learn about children facing extortion online, it is a source of extreme frustration for me and of justified concern amongst parents," he said.

Alarming Data

A leading internet monitoring organization stated that instances of AI-generated exploitation content – such as webpages that may contain numerous images – had more than doubled so far this year. Instances of the most severe content, the gravest category of abuse, increased from 2,621 images or videos to 3,086.

- Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of infants aged up to two years old rose from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "represent a vital step to ensure AI tools are safe before they are released," said the chief executive of the online safety organization.

"AI tools have made it so victims can be targeted all over again with just a few clicks, providing offenders with the ability to create potentially endless amounts of sophisticated, lifelike exploitative content," she added. "Material which further exploits victims' suffering, and makes children, particularly girls, less safe both on and offline."

Counseling Session Details

The children's helpline also released details of support sessions in which AI was mentioned.
AI-related risks mentioned in the sessions include:

- Using AI to assess body size and appearance
- Chatbots dissuading young people from consulting trusted adults about abuse
- Facing harassment online involving AI-generated content
- Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 support sessions in which AI, conversational AI and related terms were discussed, four times as many as in the same period last year. Half of the mentions of AI in the 2025 sessions concerned psychological wellbeing, including the use of AI assistants for support and of AI therapy applications.