Meta, the parent company of Facebook, has announced significant changes to its policies on digitally created and altered media, in anticipation of upcoming elections that will test its ability to combat deceptive content produced by advanced artificial intelligence technologies. The changes include introducing “Made with AI” labels for AI-generated content posted on its platforms and adding separate labels for digitally altered media that poses a high risk of misleading the public on important issues. Together, these measures shift the company’s approach from removing a limited set of manipulated posts to leaving them up while giving viewers information about how they were created.
The new labeling approach applies specifically to content posted on Meta’s Facebook, Instagram, and Threads services; other platforms, such as WhatsApp and the Quest virtual reality headsets, are governed by different rules. Meta will begin applying the more prominent “high-risk” labels immediately, ahead of the presidential election in November. Political campaigns are already using AI tools in countries such as Indonesia, pushing the boundaries of existing guidelines set by Meta and other companies in this space.
The changes come after Meta’s oversight board criticized the company’s existing rules on manipulated media as incoherent, particularly after reviewing a video of President Biden posted on Facebook that had been altered to falsely suggest inappropriate behavior. The current policy covers only AI-generated content and videos that make people appear to say words they did not actually say. The oversight board recommended extending the policy to non-AI content that can be equally misleading, as well as to audio-only content and videos showing individuals doing things they did not actually do.
Meta’s Vice President of Content Policy, Monika Bickert, explained that the company will apply labels to digitally altered media that poses a high risk of misleading the public, regardless of whether it was created using AI or other means. The “Made with AI” labels will give users transparency about the origins of digitally created content, while the separate high-risk labels will flag content that could materially deceive the public on important subjects. Rather than removing such content, Meta will leave it on its platforms and inform users about its nature.
Meta previously proposed a system to detect images created with other companies’ generative AI tools by reading invisible markers embedded in the files. While a start date for that scheme was not initially disclosed, the company has now confirmed that the labeling approach will apply to content on Facebook, Instagram, and Threads, with the more prominent high-risk labels applied immediately to content deemed to pose a high risk of material deception. Meta says the changes are intended to improve transparency and protect users from deceptive digitally altered media, especially during critical periods such as elections.
Overall, Meta’s policy adjustments represent a proactive response to the evolving landscape of AI-generated content and misleading information. By introducing clear labeling and shifting toward informing users rather than removing posts, Meta aims to foster a more informed and discerning online community. The changes come in anticipation of challenges during the upcoming elections and in response to the oversight board’s criticism of inconsistencies in how digitally manipulated media was handled on its platforms.