(Stick with me here; this isn’t just another op-ed whining about social media echo chambers, I promise.)
I’m not alone, and this isn’t just an American problem. Quantitative studies of social media have demonstrated that users tend to promote their favored narratives on divisive issues and then form polarized groups. Within these groups, constant sharing of information that supports the preferred narrative drives rampant confirmation bias, leading people to ignore or dismiss alternative viewpoints and refutations – even when those refutations are legitimate.
And it’s not just social media. With the rise of internet media, everyone is free to pick and choose sources that provide exactly the kind of news they want to read. The result, critics have suggested, is that different political factions are working with different realities. This has been clearly evident in this year’s American presidential election: the two sides don’t just disagree on the issues anymore, they disagree on the facts. They disagree on reality.
It’s about to get much worse
If you think this polarization of reality is bad now, you’d better strap in. Technological innovation is going to make it much worse by facilitating the manufacture of alternate realities.
Consider: the internet already does an incredibly efficient job of spreading fake news to receptive audiences. And the more “evidence” there is to support that fake news, the stronger a hold it has. For example, in the US, Trump supporters have clung to the story that Democratic candidate Hillary Clinton wore an earpiece during various events (she didn’t) because of several photographs that have been widely circulated.
Photographs can be photoshopped, so when they’re presented as evidence, they must be scrutinized carefully. The best photoshops are indistinguishable from real photos to nearly everyone – trained professionals might spot the differences, but most of the public will be fooled by the forgery.
Now, that same technology is coming to both video and audio.
With video, for example, there’s Face2Face, the work of a team of German researchers: it captures one person’s facial expressions through an ordinary webcam and re-renders a target video so that its subject appears to make those same expressions, all in real time.
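To make that concrete, here’s a rough sketch of the per-frame loop a reenactment system of this kind performs. The function names and structure are hypothetical placeholders for the stages the Face2Face researchers describe, not their actual code, which hasn’t been released as a public library.

```python
# Hypothetical sketch of a face-reenactment loop. These functions are
# illustrative placeholders, not a published API.

def track_expression(frame):
    """Estimate the expression of the face in `frame` (e.g. as a vector of
    blendshape weights). A real tracker would fit a 3D face model here."""
    return []  # placeholder

def rerender_face(target_frame, expression):
    """Re-render the target person's face wearing `expression` and composite
    it back into the frame, preserving identity, pose, and lighting."""
    return target_frame  # placeholder

def reenact(source_frames, target_frames):
    # For each frame pair: read what the source actor's face is doing,
    # then make the target subject appear to do the same thing.
    for src, tgt in zip(source_frames, target_frames):
        expression = track_expression(src)
        yield rerender_face(tgt, expression)
```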
On the audio side, there’s technology like “Project VoCo,” a new audio-editing feature Adobe is working on that can synthesize a realistic recording of any human voice saying anything you want. You just give it a source recording of 20-plus minutes of the person’s speech, and then type what you’d like that person’s voice to say. “The algorithm does the rest and makes it sound like the original speaker said those words,” Adobe said in a statement.
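Adobe hasn’t published any API for this, but the workflow it describes boils down to two steps: fit a model to a sample of the target voice, then synthesize arbitrary text with it. The sketch below uses entirely hypothetical names to illustrate that shape; nothing here reflects Adobe’s actual interface.

```python
# Hypothetical sketch of the two-step workflow described above. `VoiceModel`
# and its methods are illustrative stand-ins, not Adobe's actual interface.

class VoiceModel:
    """Stand-in for a model fitted to a single speaker's voice."""

    def __init__(self, source_audio_path):
        # Step 1: a real system would analyze the ~20 minutes of recorded
        # speech to learn the speaker's timbre, pitch, and pronunciation.
        self.source_audio_path = source_audio_path

    def synthesize(self, text):
        # Step 2: generate audio of the learned voice "saying" `text`.
        # The synthesis backend is the hard part; silence stands in here.
        return b"\x00" * 16000  # placeholder audio bytes


model = VoiceModel("twenty_minutes_of_speech.wav")
forged_clip = model.synthesize("Words the speaker never actually said.")
```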
Neither of these technologies has been perfected yet – in fact, neither is yet available to the public. But looking at them both, it’s easy to imagine that in just five or ten years, it will be possible for anyone to produce a convincing-looking video of a political figure saying things they didn’t say.
Much of this will be easily identifiable as bullshit, of course. That video of a local political figure saying he likes to eat puppies and punch babies? Definitely fake. But there will be subtler, more convincing fakes that will be harder to disprove. That could lead to disaster in a world where people already embrace only the news that supports their narrative. Just imagine how much worse the echo-chamber effect can get when people can easily manufacture photos, videos, and audio recordings to support their position. Disproving these false recordings could be very difficult, but in the build-your-own-reality era, I’m not sure that proof will matter anyway.
Pick your reality
Say, for example, a damning audio recording of a politician talking “behind closed doors” emerges. The politician insists that the recording is a fake. His supporters believe he’s innocent and point to the recording as proof that his opponents are liars and forgers; his opponents see the recording as legitimate proof of his guilt. And what can the politician do? Even if the recording is fake, that will be practically impossible to prove in the moment. Proving that you didn’t say something is already near-impossible, and it will only get harder once anybody’s voice can be convincingly faked. Even if the politician happened to have his own recording demonstrating that the damning audio clip had been doctored, his opponents would just argue that the damning clip is the real one and the politician’s clip is the doctored one.
Yes, days or weeks down the line, audio experts will probably be able to work out which recording is legitimate, but that won’t matter. Each side will have already embraced its preferred recording as evidence for its position and moved on to the next news story.
Don’t get me wrong; this technology is really cool, and it has a lot of not-so-horrifying use cases. Technology like Face2Face could be very useful for the film industry, and tech like Project VoCo will make all sorts of audio editing jobs much, much easier. But I worry that when this technology matures, it will be a great leap towards the total politicization of “objective” truth.
And in a world where everyone – every political party, every country, every leader – can easily manufacture convincing evidence to feed the confirmation bias on their side, I wonder how the hell we’re all going to live together.
This article originally appeared on Tech in Asia.