
An image of a (fake) dog posted on LinkedIn by Microsoft’s Brad Smith. (Microsoft / Real or Not?)

Doggone it, deepfakes are tough to sniff out.

Microsoft President Brad Smith got our attention this morning with a picture of a handsome dog on his LinkedIn. But he wasn’t just kicking off the new week with some simple clickbait.


The image was Smith’s way of touting a new Microsoft-generated quiz to test if readers could determine the difference between real images and fake ones made with artificial intelligence.

Microsoft says it created the Real or Not quiz as part of a broader effort to help improve AI literacy. I once again found myself reduced to guessing on a number of the 15 images in each quiz (you can take it multiple times with fresh images). The best I could score was 67%.

A deepfake, as Microsoft defines it, is an AI-generated image, video, or audio recording, typically used to spread false information. With advancements in AI technology, deepfakes are getting harder and harder to detect, and that creates some concerning possibilities, especially as it relates to election integrity this fall.

A Real or Not? quiz image of some soldiers that was labeled a deepfake. (Microsoft / Real or Not?)

Microsoft created an educational resource page with information and tips to help people better deal with deepfakes, but none of the tips explicitly say, “a deepfake photo will almost always contain such and such giveaway.” Rather, the tips include:

Check and recheck your sources.

Check for accuracy before sharing or commenting on political and voting information.

Report suspected deepfakes and disinformation to social media authorities for review.

Keep your media literacy skills sharp as technologies keep developing.

Validate your voting plan with official government authorities.

That’s all fine advice, but will any of it help me spot fake Kamala Harris photos any better?

Some deepfake images come across as overly polished, or the fingers on people's hands appear distorted. I keep staring at the dog at the top of this story, and about the only thing that might look “off” is its mouth, but that’s a stretch. And what should we be looking for in that photo of soldiers? Give us a chance, AI!

Microsoft and Smith also released a report last week titled “Protecting the Public from Abusive AI-Generated Content,” aimed at encouraging faster action against abusive AI-generated content by policymakers, civil society leaders, and the technology industry.

Similar quiz efforts have been launched recently by Seattle-based AI nonprofit TrueMedia and The New York Times, which released a quiz in January testing readers’ ability to identify real human faces and those generated by AI.
