Research shows people trust AI-generated deepfake faces more

Research shows that people rate AI-generated deepfake images of faces as more trustworthy than photos of real people.


Tech Desk February 25, 2022

New research suggests that AI-generated faces not only pass as real to human observers, but actually earn more trust than photos of real people.

Nvidia wowed the world in 2018 with AI technology that produced realistic photos of people who don’t exist. Built on an architecture called a generative adversarial network (GAN), in which one network learns to generate images while another learns to tell them apart from real photos, the system could produce remarkably convincing counterfeit faces.
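
For readers curious about the mechanics, the following is a minimal, illustrative sketch of the adversarial training idea behind GANs, written in Python with PyTorch. It works on toy two-dimensional data rather than faces, and the network sizes and hyperparameters are assumptions made purely for illustration; it is not Nvidia's actual system.

# A toy GAN training loop: a generator learns to produce samples that a
# discriminator cannot distinguish from "real" data. Illustrative only.
import torch
import torch.nn as nn

# Generator: maps random noise vectors to fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs a score for how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "real" data distribution
    fake = G(torch.randn(64, 8))             # generator's counterfeits

    # Train the discriminator to tell real from fake.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

As the two networks compete, the generator's fakes become progressively harder to distinguish from the real data.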

The technology has come a long way since 2018, producing ever more convincing images and, at the same time, making deepfakes harder and harder to spot, even with the help of AI.

A study published in PNAS found that participants asked to pick out the fake photos performed no better than chance: they could not tell the synthetic faces apart from the real ones. The participants also rated the fake, AI-generated faces as more trustworthy than the real images. “Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,” the study’s authors wrote.

Deepfakes often have characteristic defects and glitches that can set them apart from real images, and once these flaws were pointed out, participants in the study became better at identifying the fakes. Even so, they still rated the deepfakes roughly 8% more trustworthy than the images of real people.

The researchers warned of the disastrous effects this kind of technology could have, noting that many of the deepfakes resemble average faces, which people tend to find more trustworthy, and urged developers to build safeguards around synthetic images so they can be identified as counterfeit and are not used unlawfully. According to the authors of the study, “Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.”
