Key Highlights:
- The UK Government will test whether AI models have sufficient safety features to block users from generating child sexual abuse material (CSAM).
- AI companies and the developers of LLMs will be held liable if a model does not pass the necessary safety criteria.
- The UK is among the first countries in the world to introduce a law protecting children from AI-generated sexual abuse content.
The United Kingdom Government is implementing a new law to restrict and prevent AI tools from generating sexually abusive images of children. Under the new law, an approved testing team will have the authority to evaluate whether a specific AI model or tool has the safety features necessary to prevent it from being used to create such explicit content. The rule is being implemented after a rising number of reports of AI-generated sexual images of minors. Here's everything you need to know about the UK's new cyber laws.
AI Tools To Be Tested For Child Safety
The new law is an amendment to the United Kingdom's Crime and Policing Bill. The Internet Watch Foundation (IWF) observed that instances of AI-generated child sexual abuse material (CSAM) have more than doubled over the last year, owing to the rising availability of free AI image-generation tools. The agency identified and removed over 426 instances of CSAM from January to October 2025, up from 199 in the same period of 2024, reflecting a rapid surge.
Under the new regulations, the UK government will appoint an approved team of AI researchers and specialists responsible for testing AI models and tools for safeguards against CSAM. However, the exact process for selecting this team has not yet been revealed, and the law does not detail what action will be taken when a tool fails the test.
UK Technology Secretary Liz Kendall says that AI tools need to be made safe at the source. While agencies already control the publishing and distribution of illegal explicit content, it is necessary to ensure that AI tools do not generate such media in the first place.
AI-Generated Child Abuse Content Now A Crime
Until now, criminal liability applied only to the generation and possession of illegal images, including CSAM. As a result, AI models often slipped under the radar of the law, even when they were involved in the process. Under the new rules, liability extends to the AI tool itself for any harm caused by the images or media it generates. Developers and owners of AI companies must therefore ensure that incidents like CSAM generation do not happen in the first place.
Companies like Meta, OpenAI, Google, and many others have implemented multiple safety measures that block users from generating explicit content. However, due to the nature of LLMs, these safeguards can often be bypassed with carefully crafted prompts, or sidestepped entirely by running open-source versions of the models. The new UK law covers all such LLMs and services, and is therefore expected to mark a major step forward in child safety.
The United Kingdom is also among the first nations in the world to enforce a law of this kind. The changes are expected to be applied globally across AI models, as child sexual abuse content is a heinous crime in almost all nations worldwide.