TrueMedia, a Seattle-based nonpartisan nonprofit, has developed an AI tool that analyzes images, videos, and audio shared on social media for evidence of manipulation. The technology was made available to the public on Tuesday, ahead of the U.S. elections. The web-based tool, initially released to journalists and fact-checkers earlier this year, lets users submit a social media post containing an image, video, or audio file; TrueMedia’s AI, working in concert with existing deepfake detection tools, then analyzes the content in real time for signs of manipulation. The goal is to give the public access to advanced deepfake detection technology and help curb the spread of disinformation online.
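The article does not describe TrueMedia’s internal architecture or API, but the general approach it outlines, running a submitted file past several existing detectors and combining their verdicts, can be illustrated with a minimal Python sketch. Everything here is a hypothetical stand-in: the detector names, scores, and 0.5 threshold are illustrative assumptions, not TrueMedia’s actual implementation.

```python
# Hypothetical sketch of an ensemble deepfake check, NOT TrueMedia's actual code.
# Each "detector" stands in for an existing third-party detection service;
# the names, scores, and threshold below are illustrative assumptions.

from dataclasses import dataclass
from statistics import mean


@dataclass
class DetectorResult:
    name: str
    manipulation_score: float  # 0.0 = likely authentic, 1.0 = likely manipulated


def run_detectors(media_path: str) -> list[DetectorResult]:
    """Stand-in for querying several external deepfake detectors on one file."""
    # In a real system, each entry would be an API call to a separate service.
    return [
        DetectorResult("face_swap_detector", 0.91),
        DetectorResult("gan_artifact_detector", 0.78),
        DetectorResult("audio_clone_detector", 0.12),
    ]


def aggregate(results: list[DetectorResult], threshold: float = 0.5) -> str:
    """Combine individual detector scores into a single human-readable verdict."""
    avg = mean(r.manipulation_score for r in results)
    if avg >= threshold:
        return f"likely manipulated ({avg:.0%} aggregate score)"
    return f"no strong evidence of manipulation ({avg:.0%} aggregate score)"


if __name__ == "__main__":
    results = run_detectors("suspect_video.mp4")
    for r in results:
        print(f"{r.name}: {r.manipulation_score:.2f}")
    print(aggregate(results))
```

Averaging is only one way to fuse detector outputs; a production system might weight detectors by track record or flag disagreement between them for human review.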
Oren Etzioni, TrueMedia’s founder and a prominent computer scientist and AI specialist, emphasized the importance of giving people tools to verify the authenticity of online content, especially during an election cycle marked by rampant disinformation. He noted that this kind of advanced deepfake detection, once reserved for government agencies, is now accessible to everyone. Recent incidents, such as a fake post depicting pop star Taylor Swift endorsing Donald Trump, have underscored the urgency of countering AI-driven misinformation. TrueMedia’s tools aim to help individuals distinguish real from manipulated content on social media platforms.
TrueMedia shared examples of deepfakes its technology helped identify during major global events, including the detection of 41 AI content farm accounts that published thousands of videos garnering millions of views over several months. To raise public awareness of how prevalent deepfakes have become, the organization released a quiz in June that tests people’s ability to spot fake images, videos, and audio clips. Its collaborations, such as the partnership with Microsoft’s AI for Good Lab, aim to strengthen AI deepfake detection and equip individuals to counter the rising threat of manipulated digital content.
Etzioni has warned about the repercussions of widespread deepfakes used to manipulate public opinion, calling them a form of “disinformation terrorism.” As deepfake technology grows more accessible and sophisticated, the risk of misinformation targeting voters and swaying critical decisions has grown with it. Etzioni, a University of Washington professor and former CEO of the Allen Institute for AI, pointed to the democratization of deepfake creation: a tactic once exclusive to state actors is now available to virtually anyone with access to the technology. The collaboration between TrueMedia and Microsoft’s AI for Good Lab reflects a commitment to using advanced AI tools against the proliferation of malicious deepfake content.
Microsoft President Brad Smith acknowledged the challenges posed by evolving deepfake technology and praised TrueMedia’s efforts to build tools that harness AI for positive impact. He emphasized the importance of ethical AI practices in countering harmful AI applications, calling TrueMedia’s deepfake detection tools a prime example of using good AI to combat misinformation. As organizations continue to invest in technology-driven responses to manipulated digital content, collaborations like the one between TrueMedia and Microsoft highlight AI’s potential as a force for good in the ongoing battle against disinformation. By giving individuals accessible tools to detect and verify the authenticity of online content, these efforts aim to help users make informed decisions and foster a more secure digital environment.


