Westfield Public Schools in New Jersey held a board meeting to address a troubling incident in which high school students used artificial intelligence software to create sexually explicit images of their classmates. Despite concerns raised by parents and students, the district has taken no significant public action to address the incident or to update school policies on exploitative A.I. use. As the technology evolves rapidly, many districts are struggling to respond effectively to the challenges posed by artificial intelligence and other emerging tools available to students.
Instances of boys using “nudification” apps to manipulate clothed photos of their female classmates have become a growing concern in schools across the United States. These digitally altered images, known as “deepfakes” or “deepnudes,” can have serious consequences for those targeted, including mental health harm, reputational damage, and threats to college and career prospects. The F.B.I. has warned that distributing computer-generated child sexual abuse material is illegal, including realistic-looking A.I.-generated images of identifiable minors engaged in sexually explicit conduct.
Some schools, like Issaquah High School in Washington, have struggled to respond effectively to incidents involving exploitative A.I. apps. In one case, a police detective had to inform an assistant principal of the school’s obligation to report sexual abuse, including possible child sexual abuse material, before the school took a more proactive approach. Others, such as Beverly Vista Middle School in California, have taken a strong stance against the creation and circulation of explicit A.I.-generated images, expelling students involved and communicating publicly about the consequences of such actions.
Parents and students, including Dorota Mani and her daughter Francesca, have been vocal advocates for stronger policies and laws addressing explicit deepfake incidents in schools. Despite their efforts to raise awareness and push for policy changes, some schools have been slow to provide official reports or take disciplinary action. Westfield Public Schools, for example, has been criticized for its lack of transparency and communication about the deepfake incident involving its students. While the district acknowledges the need to educate students and establish clear guidelines for responsible technology use, concerns remain about whether those efforts are adequate.
In response to deepfake incidents, schools like those in the Beverly Hills Unified School District have taken a proactive approach to the misuse of artificial intelligence. By communicating promptly with parents, staff, and students and imposing severe disciplinary consequences on those involved, such schools seek to create a safe and secure environment. Dr. Michael Bregy, superintendent of Beverly Hills schools, emphasizes that protecting students’ emotional safety matters as much as their physical safety, and points to the urgent need for schools and lawmakers to act before further incidents of A.I. abuse occur.
As schools navigate the complexities of exploitative A.I. apps and deepfake incidents, the need for stronger policies, education, and enforcement mechanisms to safeguard students has become pressing. By collaborating with law enforcement, improving communication with parents and students, and establishing clear guidelines for responsible technology use, schools can work toward a safer environment for all students. The pace of technological change demands proactive measures to mitigate the risks of explicit deepfake images and to promote a culture of respect and ethical use of A.I. in educational settings.