Justin Culmo was arrested for using the AI model Stable Diffusion to create thousands of illegal images of children he photographed at Disney World and at a middle school in Florida. He has been indicted on child exploitation charges, including abusing his two daughters, secretly filming minors, and distributing child sexual abuse imagery on the dark web. Although he has not been charged with producing AI-generated CSAM, Culmo faces a jury trial in October.

Former Department of Homeland Security agent Jim Cole described Culmo’s activities as a targeted attack on the safety of children in communities and a violation of privacy. Culmo has been a high-priority target for global law enforcement agencies since 2012. Facial recognition technology helped identify one of his victims, and authorities found additional child abuse images on his devices, including images of his daughters.

The case highlights the growing use of AI to transform photos of real children into realistic images of abuse. Federal prosecutors have filed charges against Army soldier Seth Herrera for using generative AI tools to produce sexualized images of children, and against another individual for using Stable Diffusion to create CSAM from images of children solicited over Instagram. The Internet Watch Foundation reported detecting more than 3,500 AI-generated CSAM images online this year.

Stable Diffusion 1.5, a generative AI tool commonly used by pedophiles, can be run on a personal computer, so generated images are never stored on provider servers where they might be detected. Stanford researchers found that an early version of Stable Diffusion was trained on illicit images of minors, though Stability AI claimed it was not responsible for version 1.5. As authorities work to prosecute creators of AI CSAM, cases involving sexualized images of real children may be charged similarly to standard CSAM cases, while entirely AI-generated images may be charged under U.S. obscenity law.

Despite efforts to prevent AI misuse and prosecute offenders, the availability of tools like Stable Diffusion poses challenges for law enforcement. The Justice Department has taken a hard line on AI-enabled criminal conduct and plans to seek increased sentences where warranted. As the use of AI in creating child exploitation imagery continues to evolve, experts emphasize the importance of collaboration between tech companies, law enforcement, and non-profit organizations to protect children online and hold offenders accountable.
