Darshan Pareek

Internet becoming awash with AI-generated child sexual abuse material, claims report



More than 11,000 AI-generated images of children that could be construed as criminal were produced within a single month, the Internet Watch Foundation (IWF) said in a report.


The IWF found 20,254 AI-generated images posted on a dark web child sexual abuse material, or CSAM, forum within a one-month period. Of these, 11,108 images were selected for assessment by IWF analysts, focusing on those judged most likely to be criminal under UK laws.


The assessment revealed that 2,562 images were considered criminal pseudo-photographs, and 416 were deemed criminal prohibited images.


IWF analysts dedicated a combined total of 87.5 hours to assess these images under the Protection of Children Act 1978 and the Coroners and Justice Act 2009.


The IWF also reported an increase in self-generated CSAM featuring children under 10, which it found on more than 100,000 webpages in the past year.


The Stanford Internet Observatory issued a report last year revealing the discovery of more than 3,200 images suspected to be child sexual abuse material within LAION, a large open dataset used to train AI image generators.



The Internet Watch Foundation (IWF) is a non-profit organization dedicated to combating the proliferation of child sexual abuse material (CSAM) on the internet.


Established in 1996, the IWF operates as an independent body in the United Kingdom, working collaboratively with industry partners, law enforcement, and the public to identify and remove illegal content.


These AI tools use text-to-image technology, allowing users to type prompts into online generators that rapidly produce realistic images.


The body's chief executive, Susie Hargreaves, told the Guardian, "Ten years ago we hadn’t seen self-generated content at all, and a decade later we’re now finding that 92% of the webpages we remove have got self-generated content on them."


She added, "That’s children in their bedrooms and domestic settings where they’ve been tricked, coerced or encouraged into engaging in sexual activity which is then recorded and shared by child sexual abuse websites."

The report said that perpetrators can legally download tools to create these images offline, making detection challenging.


The report stressed that most of the AI-generated CSAM now found is realistic enough to be treated as 'real' CSAM, and that the most convincing AI CSAM is visually indistinguishable from real CSAM.
