The Federal Communications Commission (FCC) has proposed new rules that would require political advertisers to disclose their use of artificial intelligence in broadcast television and radio ads, an effort to address concerns about misleading AI-generated media in political campaigning. The proposed rules aim to bring transparency to the use of AI in political ads and to inform voters when voices or images may have been manipulated. However, it is uncertain whether the regulations will take effect before the November presidential election, and the move has drawn pushback from the chairman of the Federal Election Commission (FEC).

Political candidates and parties in the US and abroad have been using generative AI tools; some have voluntarily disclosed that use, while others have used the technology to mislead voters. Under the FCC's proposal, broadcasters would be required to ask political advertisers whether their content was created with AI tools, such as voice-cloning software or text-to-image generators. Broadcasters would also need to make an on-air announcement when AI-generated content appears in a political ad and to note the use of AI in their online political files. The proposal does not cover streaming platforms, however, leaving digital political advertising unregulated at the federal level.

Following a 3-2 vote by the commission, the proposal will enter a 30-day public comment period and a 15-day reply period before finalization. It remains unclear whether the rule will take effect before the presidential election. Despite concerns raised by Republicans, Democratic FCC Chairwoman Jessica Rosenworcel intends to move forward with the regulatory process, emphasizing the urgency of addressing AI manipulation in political advertising.

Amid the jurisdictional dispute between the FCC and the FEC, Rosenworcel's proposed rule has won support from some Democrats, while Republicans have objected both to potential conflicts between the agencies and to the timing of the regulations so close to the election. The FCC argues it has the authority to regulate the issue under existing law, and advocacy groups and lawmakers have urged federal agencies to ensure voters can distinguish fact from fiction in political ads.

With Congress yet to pass legislation directing agencies on how to regulate AI in politics, some Republican senators have circulated a bill to block the FCC's new rules. The FEC is also weighing its own petition on regulating deepfakes in political ads. In the absence of federal action, more than one-third of states have enacted their own laws governing the use of AI in campaigns and elections. The FCC's earlier ruling on AI-generated robocalls reflects the commission's broader effort to address the misuse of AI in political communication, underscoring the need for comprehensive regulation in an era of AI-driven disinformation.

As the regulatory landscape for AI in politics continues to evolve, the FCC’s proposed rules represent a step towards greater transparency and accountability in political advertising. The ongoing debate between federal agencies, lawmakers, and advocacy groups underscores the complex challenges posed by AI manipulation in elections, highlighting the need for a multi-faceted approach to address the growing threat of misleading AI-generated content in political campaigns.
