Editor's Note: The Proliferation of Dangerous Untruths
In November 2017, I wrote an editor’s note about new technology that would allow anyone to develop deepfake video and audio. I noted that “advancements in computer software might soon erase the line between truth and lies. Current controversy swirls around written material online, but audio and video editing technology could make it impossible to believe anything you read, hear, or see.”
With the proliferation of AI, the warning sirens have been getting louder over the past six years. However, in his article, “What the Doomsayers Get Wrong about Deepfakes,” Daniel Immerwahr disagrees that the end is nigh. He writes that “social media’s algorithmic filters are allowing separate groups to inhabit nearly separate realities. Stark polarization, meanwhile, is giving rise to a no-holds-barred politics. We are increasingly getting our news from video clips, and doctoring those clips has become alarmingly simple. The table is set for catastrophe. And yet the guest has not arrived.”
Immerwahr notes that creating convincing deepfakes is easier than ever, but he contends that “it’s just hard to point to a convincing deepfake that has misled people in any consequential way.”
Media forensics expert Walter Scheirer explores this phenomenon in his new book, A History of Fake Things on the Internet, when recounting the results of his research on the influence of deepfakes on U.S. presidential elections. Scheirer found that the evidence researchers “brought back to the lab wasn’t matched to what the government thought the problem was. There were no deepfakes and very few instances of scenes that were altered in a realistic way so as to deceive the viewer. Nearly all of the manipulated political content was in meme form, and it was a lot more creative than we expected.”
Digging deeper, Scheirer asked colleagues “about their experience with fake content on the Internet—all conceded that they hadn’t run into the active measures that we were all worried about. It was memes all the way down.”
Immerwahr clarifies that harmful information does spread on the Internet. “Distressing numbers of people profess a belief that COVID is a hoax, that the 2020 election was rigged, or that Satan-worshipping pedophiles control politics. Still, none of these falsehoods rely on deepfakes. There are a few potentially misleading videos that have circulated recently, such as one of Representative Nancy Pelosi slurring her speech and one of Biden enjoying a song about killing cops. These, however, have been ‘cheapfakes,’ made not by neural networks but by simple tricks like slowing down footage or dubbing in music.”
“Deepfake catastrophizing depends on supposing that people—always other people—are dangerously credulous, prone to falling for any evidence that looks sufficiently real. But is that how we process information?” asks Immerwahr.
“The most effective fakes have been the simplest. Vaccines cause autism, Obama isn’t American, the election was stolen, climate change is a myth—these fictions are almost entirely verbal,” Immerwahr notes. “They are too large to rely on records, and they have proved nearly impervious to evidence… We accept them because they affirm our fundamental beliefs, not because we’ve seen convincing video.”
Even as people cling to dangerous untruths about the world and one another, the workplace might have a calming influence. For the second year in a row, the Edelman Trust Barometer found that people trust their employers more than any other group, including government and the media.
According to an article on the study, the workplace is “where people feel safe to talk about the issues of the day, debate, and push for change from within. There’s a sense of control that comes with being able to decide if this or that employer is right for you. Savvy leaders will deploy processes to elevate employee voices and bolster connections between stakeholder groups.”
Security professionals can help bolster these connections by maintaining the safety and security of the workplace. The authors in the January 2024 Security Management content package address the persistent hold that terrorism and extremism have over our digital world. Part of the solution is protecting the workplace from the spread of such extremist concepts—from supporting mental health to bolstering a sense of community.
Though the catastrophic future predicted in 2017 has failed to arrive, people who want to believe conspiracy theories continue to do so. In fact, once these tales take hold, they are hard to shake. The opening example in the cover story for Security Management's November 2017 issue discussed the shooting at the Comet Ping Pong pizza restaurant in Washington, D.C., by a conspiracy theorist who believed the owners were holding children hostage at the site. That claim was an Internet hoax.
In November 2023, Elon Musk gave the old hoax new life, posting a meme claiming that the child trafficking was real and that an expert who debunked the original story had been imprisoned for possessing child pornography.
“Does seem at least a little suspicious,” wrote Musk. And that’s all it takes for a lie to be resurrected. No deepfake, and no AI, required.