The UK government has announced four new laws aimed at tackling the growing threat of AI-generated child sexual abuse material (CSAM), as reported by the BBC.
The Home Office says the UK will be the first country in the world to make it illegal to possess, create, or distribute AI tools designed to generate CSAM, with offenders facing up to five years in prison.
In addition, possessing AI-generated paedophile manuals – which provide guidance on using AI for child exploitation – will also become illegal, carrying a maximum sentence of three years.
Home Secretary Yvette Cooper described AI as “industrialising the scale” of child abuse, telling the BBC that it is “putting online child abuse on steroids.” She added that the government may need to take even stronger measures in the future.
The legislative changes will also make it an offence to operate websites that facilitate the sharing of child abuse material or provide grooming advice. Those found guilty could face up to 10 years in prison.
Additionally, Border Force officers will be given new powers to instruct individuals suspected of posing a sexual risk to children to unlock their digital devices for inspection when entering the UK. This is aimed at tackling CSAM filmed abroad, with offenders facing up to three years in prison, depending on the severity of the material found.
AI-generated CSAM can involve either fully artificial images or manipulated photos of real children, where software can “nudify” images or swap faces. Some cases even involve the use of real children’s voices, leading to further victimisation.
The National Crime Agency (NCA) reports that 800 arrests are made each month in relation to online threats against children. It estimates that 1.6% of the UK’s adult population—around 840,000 individuals—pose a risk to children, both online and offline.
While many welcome the new laws, some experts argue they do not go far enough. Professor Clare McGlynn, a legal expert on online abuse, supports the changes but highlights “significant gaps” in the approach. She has called for a ban on “nudify” apps and action against mainstream pornography sites that normalise sexual activity with young-looking actors.
The Internet Watch Foundation (IWF) has reported a 380% rise in cases of AI-generated CSAM, with 245 confirmed reports in 2024, compared to 51 in 2023. The IWF warns that these images are becoming more common on the open web and more difficult to distinguish from real abuse imagery. The organisation's interim chief executive, Derek Ray-Hill, told the BBC:
“The availability of this AI content further fuels sexual violence against children. It emboldens and encourages abusers, and it makes real children less safe. There is certainly more to be done to prevent AI technology from being exploited, but we welcome [the] announcement, and believe these measures are a vital starting point.”