State lawmakers across the country are enacting laws to regulate deepfakes in political campaigns in response to bipartisan concern over AI-generated election interference. More than a dozen states, both Republican- and Democrat-led, have passed legislation requiring disclosures in political ads with deepfake content. Some states have also passed laws allowing victims to seek court orders to stop the spread of deepfake content. Violators of these laws can face prison time or hefty fines, depending on the state.

While candidates have avenues to challenge deceptive ads, experts are unsure whether current laws will be sufficient to combat deepfakes. The rapid evolution of AI presents a unique challenge: anyone with minimal technical knowledge can now create convincing deepfakes. Arizona state Rep. Alexander Kolodin sponsored legislation allowing candidates to seek court orders declaring manipulated content to be a deepfake, giving them a tool to counter misinformation that spreads quickly online.

Big Tech companies like TikTok, Meta, and YouTube have taken steps to moderate deepfake content, but federal action on the issue remains uncertain. Bills requiring clear labeling of deepfakes have been introduced in Congress, but there is little indication that lawmakers will address the issue before November. Without federal action, the responsibility to regulate AI in campaign ads falls to agencies like the Federal Election Commission and the Federal Communications Commission, with the former yet to issue any rules on AI-generated deepfakes.

At the state level, the task of regulating deepfakes in elections has led to numerous bills being introduced, with fights over their scope and reach. Not all bills have made it to governors’ desks, and in states like Georgia, there is currently no law preventing political ads with deepfakes from airing without disclosure. State efforts to combat harmful deepfakes include training election workers to recognize them and launching campaigns to educate voters on spotting deepfakes. State-led initiatives aim to complement existing disclosure laws and protect against potential election interference.

While state efforts to regulate deepfakes in campaigns continue to evolve, the lack of federal action remains a significant challenge. Public Citizen and other advocacy groups have pushed for federal legislation to address the issue, but progress has been slow. Agencies like the FEC and FCC have the authority to regulate AI in campaign ads but have yet to issue comprehensive rules. As the November election approaches, questions remain about the effectiveness of current state laws and the ability of these agencies to mitigate the impact of AI-generated election interference.
