California has enacted some of the toughest laws in the United States to combat election deepfakes ahead of the 2024 election, with Governor Gavin Newsom signing three landmark measures this week. The laws ban the use of AI to create and circulate false images and videos in political ads close to Election Day. One takes effect immediately and allows individuals to sue for damages over election deepfakes; another requires large online platforms to remove deceptive material starting next year. The laws are already being challenged in court by a conservative activist who claims they censor free speech and invite legal action over content people simply dislike.
The lawsuit, filed by an individual who created parody videos featuring altered audio of Vice President Kamala Harris, argues that the laws infringe on free speech rights and allow anyone to sue over content they disagree with. The governor's office has countered that the laws do not ban satire or parody but require disclosure when AI has been used to alter a video or image. The suit is among the first legal challenges to such legislation in the U.S., and the attorney representing the complainant plans to file another over similar laws in Minnesota. Lawmakers in multiple states have introduced comparable proposals in response to the global rise of AI-enhanced election disinformation.
Among the laws Newsom signed, one took effect immediately and targets deepfakes surrounding the 2024 election, covering materials that could affect voting or misrepresent election integrity. It prohibits creating and publishing false election-related materials within 120 days before Election Day and 60 days after, allows courts to halt distribution of such materials, and imposes civil penalties on violators. Critics, including free speech advocates and Elon Musk, argue the law is unconstitutional under the First Amendment; after the laws were signed, Musk posted an AI-generated video featuring altered audio on social media.
How effective these laws will be against election deepfakes is unclear, given concerns that court proceedings move slowly while fake images and videos spread rapidly. Public Citizen, a consumer advocacy organization, notes that the laws are untested in court and that a court order to halt distribution could take several days, long enough for damaging misinformation to circulate widely. Still, Ilana Beller of Public Citizen said the laws could deter would-be violators, since taking down content quickly limits its spread and its impact on elections.
Assemblymember Gail Pellerin, who authored one of the laws, described it as a simple tool against misinformation: digitally altered videos used for parody must be labeled as such. Newsom also signed a law requiring campaigns to disclose AI-generated materials starting in 2025, a further push for transparency in political advertising. The ongoing legal challenges and debates underscore the difficulty of regulating deepfakes in the digital age while balancing free speech rights against the integrity of elections.