President Joe Biden’s administration is taking steps to address the growing issue of abusive sexual images created using artificial intelligence technology. These realistic images, known as deepfakes, can be generated using AI tools and shared online without the victim’s consent. This problem primarily affects women and girls, with teenagers and members of the LGBTQ+ community being particularly vulnerable. The White House is calling for voluntary cooperation from tech companies, financial institutions, and other entities to help curb the creation, distribution, and monetization of these nonconsensual AI images.

The administration is urging not only AI developers but also payment processors, financial institutions, cloud computing providers, search engines, and mobile app stores to take action against image-based sexual abuse. Companies are being asked to disrupt the monetization of explicit images, especially those featuring minors. While many platforms claim not to support such content, enforcement of these policies can be inconsistent. Cloud services and app stores are encouraged to restrict access to services that facilitate the creation or alteration of sexual images without consent. The administration also says survivors of AI-generated or real nude imagery should have easier ways to get such content removed from online platforms.

Schools in the U.S. and other countries are also dealing with AI-generated deepfake nudes of their students, with incidents in which teenagers manipulated classmates' images and shared them among peers. Last summer, major technology companies made voluntary commitments to place safeguards on new AI systems before release; that was followed by an executive order from President Biden aimed at guiding responsible AI development. There is a bipartisan push in Congress to allocate funding for AI research and development, but legislation is still needed to provide comprehensive safeguards against AI-generated child abuse imagery.

While current laws already criminalize the creation and possession of sexual images of children, even if they are fake, there is limited oversight over the tech tools that facilitate the creation of such images. Some commercial websites hosting these tools disclose little about who runs them or the technology behind them. AI image-generator tools such as Stable Diffusion have been used to produce thousands of AI-generated child sexual abuse images, and the open-source nature of some AI models makes it difficult to control their use once released. The White House Gender Policy Council emphasizes the need for both voluntary commitments from companies and legislative action to combat the spread of AI-generated abusive imagery.

The issue of AI-generated sexual abuse images extends beyond the use of open-source technology, highlighting a broader problem affecting online platforms and services. The rapid advancement of generative AI tools has led to an increase in nonconsensual imagery, posing significant risks to individuals’ privacy and safety. The administration’s efforts to engage the private sector in addressing this issue are a step in the right direction, but a comprehensive legislative framework is essential to effectively combat the proliferation of AI-generated abusive content. The push for greater accountability and responsibility from tech companies and financial institutions is crucial in protecting vulnerable individuals, especially women, girls, and minors, from the harmful effects of nonconsensual AI imagery.
