A report indicates that 8,029 AI-generated images and videos realistically depicting child sexual abuse were identified, a 14% increase from the previous year. The number of videos in particular grew more than 260-fold in a single year.



A report published on March 24, 2026, by the Internet Watch Foundation (IWF) warned that realistic images and videos of child sexual abuse generated by AI are at record levels online. The report reveals the extent of the harm caused by AI-generated content and highlights growing public demand for action against unsafe AI tools.

AI child abuse at record high as public backs action on unsafe tools
https://www.iwf.org.uk/news-media/news/dangerous-ai-child-sexual-abuse-reaches-record-high-as-public-backs-clampdown-on-uncensored-tools/

The IWF identified a total of 8,029 AI-generated images and videos of child sexual abuse in 2025, a 14% increase in criminal AI content over the previous year, underscoring the accelerating spread of AI-generated child sexual abuse material (AI-CSAM).

What stands out most in this report is the increase in videos. The IWF identified 3,443 AI-generated child sexual abuse videos in 2025, a 26,385% increase from 13 in the previous year. Of these, 65% fell into the most serious category, Category A, exceeding the 43% of non-AI-generated illegal videos that were categorized as Category A. The IWF believes that AI is driving the generation of more extreme and serious content.



The IWF rejects the idea that AI-generated child sexual abuse imagery constitutes a 'victimless crime,' arguing that generative AI models can learn from and refine existing abuse images, so that real victims are harmed again and again. The IWF also notes cases in which perpetrators use AI to create even more extreme images based on abuse material that was already widely circulated, warning that the abuse is not a past event but is continually renewed by AI.

Furthermore, the IWF explains that even AI-generated images can increase sexual interest in children, normalize violence, and increase the risk of physical sexual offenses. A survey cited in the report found that 44% of people who viewed images of child sexual abuse said they began to consider direct contact with children, and 37% actually sought that contact.

The IWF also found that in 2025, 97% of illegal AI-generated images for which a gender was recorded depicted girls. The IWF argues that this trend is linked to online misogyny and the shaming of girls. Indeed, a 2024 survey reported that 59% of girls and young women aged 11 to 21 were worried about AI-generated fake images of themselves or about being impersonated.



The IWF argues that the surge in such content is due to the significant reduction in the barrier to creation thanks to fine-tuning technologies like LoRA. The IWF points out that even realistic deepfake models resembling specific children can be created with just 20 to 30 images, and training can take as little as 15 minutes. Furthermore, the IWF found 202 individual models being shared on a dark web forum in December 2025.

In addition, the IWF points to dangers beyond image generation tools. For example, AI chatbots, even if they do not generate images themselves, pose a risk of providing technical advice on how to commit child sexual abuse or of simulating sexual conversations with children. The IWF says it has observed users on dark web perpetrator forums using publicly available closed-source chatbots to get advice on setting up and tuning open-source generative AI. The IWF also asserts that 'nudify' tools, which generate nude images from clothed photos, are being misused to create obscene images of children despite being marketed to adults, and that they have no positive uses, serving only to promote shame, harassment, and exploitation.



The IWF considers AI-generated child sexual abuse content to be no longer a hypothetical threat, but a problem that is already causing real harm. Therefore, they urge governments and technology companies to thoroughly implement 'safety-by-design,' incorporating safety measures before releasing AI products, and to mandate model verification.

Posted by log1i_yk