The Federal Communications Commission is considering new regulations that would require political ads on TV and radio to disclose the use of artificial intelligence. Chairwoman Jessica Rosenworcel has called on other commissioners to support these rules amid concerns about the potential for AI-generated deepfakes to disrupt elections. The proposed regulations would apply to broadcast TV, radio, cable, and satellite providers, requiring political advertisers to make on-air disclosures and provide written disclosures in public files. This move aims to address a gap in the regulation of AI in political advertising, as existing election laws do not specifically address AI-generated content.

Existing US election law prohibits campaigns from fraudulently misrepresenting candidates or political parties, but it is unclear whether that prohibition extends to AI-generated content. Last summer, Republicans on the Federal Election Commission blocked an effort to clarify the question, while some lawmakers have proposed legislation to address AI in elections. The AI Transparency in Elections Act, introduced by Senators Klobuchar and Murkowski, would require disclaimers on political ads that use AI-generated content. Senate Majority Leader Schumer has emphasized the need for guardrails on AI in elections, but passing meaningful legislation during an election year may prove challenging.

In response to concerns about AI in politics, online platforms like Meta have implemented their own measures. Meta requires campaigns to disclose the use of deepfakes in ads and has banned its generative AI tools from being used for political advertising. While these company policies can help mitigate the risks of AI in political ads, they are self-imposed and apply only to individual platforms; the FCC's proposed regulations would create a binding disclosure requirement for the broadcast, cable, and satellite outlets the agency oversees. The FCC wants consumers to be fully informed when AI tools are used in political ads, citing the growing accessibility of the technology and its potential for misuse in election campaigns.

Rosenworcel’s proposal would open a rulemaking process at the FCC, which is expected to take months to complete. The rules would apply to traditional broadcast TV and radio, along with cable and satellite providers, but not to internet-based platforms such as streaming services or social media, which fall outside the agency's jurisdiction. By requiring both on-air and written disclosures, the FCC aims to ensure that viewers and listeners know when AI is used in political ads. The initiative is part of a broader push for transparency in campaign messaging as technology's influence on elections grows.

While the FCC’s proposed regulations focus narrowly on disclosure requirements, broader questions remain about the legal framework governing AI-generated content in elections. The debate over transparency in political advertising reflects a larger struggle within the US government over how to regulate artificial intelligence. In that context, the FCC’s proposal is a significant, if limited, step: by prioritizing consumer awareness, the agency aims to protect electoral integrity and blunt potential manipulation through AI-generated content.
