Microsoft unveils new tool to detect artificially manipulated media

Microsoft Video Authenticator can analyse a still photo or video to detect deepfakes


Reuters/Tech Desk September 02, 2020

Microsoft has developed new technology to combat disinformation spread by deepfakes. Microsoft Video Authenticator can analyse media and detect artificially manipulated content.

"Today, we’re announcing Microsoft Video Authenticator. Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it can provide this percentage in real-time on each frame as the video plays," Microsft announced on their blog.

Deepfakes are computer-generated or AI-manipulated photos, videos or audio files in which content has been altered without the viewer detecting any foul play.

Elaborating on what motivated the tech giant to launch the tool, Microsoft said: "Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate. At Microsoft, we’ve been working on two separate technologies to address different aspects of the problem."

"One major issue is deepfakes, or synthetic media, which are photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways. They could appear to make people say things they didn’t or to be places they weren’t, and the fact that they’re generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming US election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes."

Video Authenticator was created using a public dataset from FaceForensics++ and was tested on the DeepFake Detection Challenge Dataset, both leading datasets for training and testing deepfake detection technologies.



 

“How does the Video Authenticator tool work? It detects the blending boundary of a #deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”

Microsoft Video Authenticator can analyse a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated.


In the case of a video, it can provide this percentage in real-time on each frame as the video plays, detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.
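Microsoft has not published Video Authenticator's model or a public API for it, so the following Python sketch is purely illustrative of the per-frame output the company describes. Here score_frame is a hypothetical stand-in for any deepfake classifier that returns a manipulation confidence between 0 and 1, and OpenCV is assumed only for reading video frames.

# Illustrative sketch only: score_frame is a placeholder, not Microsoft's model.
import cv2  # OpenCV, assumed available for frame extraction

def score_frame(frame) -> float:
    # A real detector (trained on datasets such as FaceForensics++) would go here;
    # this placeholder reports 0% so the loop below runs end to end.
    return 0.0

def scan_video(path):
    # Yield (frame_index, confidence) pairs, mimicking a real-time per-frame score.
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        yield index, score_frame(frame)
        index += 1
    capture.release()

for i, confidence in scan_video("clip.mp4"):
    print(f"frame {i}: {confidence:.0%} chance of manipulation")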

Microsoft’s new technology is split into two components. The first is a tool built into Microsoft Azure that allows a content producer to add digital hashes and certificates to a piece of content.

The hashes and certificates then live with the content as metadata wherever it travels online.
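The announcement does not describe the exact hash or certificate format, so the snippet below is only a rough illustration of the producer-side idea: a cryptographic digest of the file is computed and stored alongside it as metadata. A sidecar JSON manifest stands in for whatever format the Azure tool actually uses, and the file names are hypothetical.

# Producer-side sketch: attach a content hash as metadata (illustrative only).
import hashlib
import json

def make_manifest(media_path, manifest_path, producer):
    # Compute a SHA-256 digest of the media file and write it to a sidecar manifest.
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    manifest = {"file": media_path, "sha256": digest, "producer": producer}
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)

make_manifest("clip.mp4", "clip.manifest.json", "Example Newsroom")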

Apple may be planning to launch its own search engine

The second component is a reader, designed to check content for any evidence that these fingerprints have been affected by third-party changes.
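A matching reader-side sketch, again purely illustrative rather than Microsoft's code: recomputing the digest and comparing it with the stored manifest reveals whether the content changed after it was certified.

# Reader-side sketch: verify the content against its manifest (illustrative only).
import hashlib
import json

def verify(media_path, manifest_path):
    # Recompute the digest and compare it with the value recorded by the producer.
    with open(manifest_path) as f:
        manifest = json.load(f)
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == manifest["sha256"]

print("unchanged since certification" if verify("clip.mp4", "clip.manifest.json") else "altered or uncertified")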

The tech giant has also announced a separate system to help content producers add hidden code to their footage so that any subsequent changes can be easily detected.

With US elections around the corner, the new technology will be particularly helpful in detecting manipulated content.

Recently, Twitter also used its new “manipulated media” label for the first time on a video clip of US Democratic presidential candidate Joe Biden that was retweeted by President Donald Trump.

The clip shows a TV interview during which Biden appeared to be falling asleep. But it was fake: the clip of the host was from a different TV interview, and snoring effects had been added.

Further, Google has also stepped up efforts to battle "deepfakes" by releasing new data to help researchers detect videos manipulated by artificial intelligence.
