Putting Generative AI to Use for Crime: Fraud, Disinformation, Exploitation, and More
Fake news, manipulated media, deepfakes—disinformation continues to evolve online, fueled by increasingly sophisticated generative artificial intelligence (gen AI) tools. Deepfakes in particular have been used to imitate individuals—whether in video or audio form—to achieve a number of criminal aims.
While intentional reputational damage to organizations is certainly possible with deepfakes, malicious actors more frequently aim to score tangible, immediate profits or to inflict damage on an individual.
According to George Vlasto, head of product at Resolver, speaking during a presentation on AI risks and opportunities at the 2024 Resolver Ascend security summit, AI models can enable a number of threats, including:
- Creation of malware. AI is great at quickly writing code, enabling malicious actors with minimal technical skills to scale and modify malware. AI tools can hallucinate code, though, meaning that legitimate coders using AI as a copilot can inadvertently introduce malware or vulnerabilities (see the hypothetical sketch after this list).
- Altered reality. Adversaries can edit video or audio very realistically to change a message, such as by lip-syncing incorrect audio over an existing video or clipping out a key phrase without a visible cut in video.
- Social engineering. AI enables adversaries to hyper-personalize phishing and emotional manipulation attempts by copying known voices, tones, cues, or faces.
- Deepfakes. Disinformation and political unrest can spread very quickly over social media when fueled by convincing video or audio that feeds a narrative, Vlasto says. Over time, this makes people believe they cannot trust anything they see—especially online—which can undermine societal ties and trust in institutions.
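To make the copilot point above concrete, the sketch below is a hypothetical illustration of how an AI-suggested snippet can look correct while quietly introducing a flaw. The table, function names, and scenario are invented for this example, and the vulnerability shown (SQL injection via string formatting) is a classic one rather than anything tied to a specific AI tool.

```python
# Hypothetical illustration of the copilot risk described above. The schema and
# functions are invented; the point is that AI-suggested code can look correct
# while quietly introducing a vulnerability.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of snippet an assistant might suggest: the query is built by
    # string formatting, which allows SQL injection via a crafted username.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver escapes the value, closing the injection hole.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```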
Although fake news is nothing new, Vlasto adds, the transmission speed and low barrier to entry of generative AI tools are revolutionary, meaning deepfakes and misleadingly altered media can spread instantly at no additional cost. While deepfake detection technology is racing to keep pace, the main line of defense is—as usual—an informed and alert populace.
“The vector of AI-based attacks is well-understood: people,” Vlasto says. Deepfake-based scams try to convince people to deviate from mandated security protocols. This means the defense is the same as it has always been: training, education, and well-understood procedures.
In that vein, it’s worthwhile to dive into some of the key areas where deepfakes and manipulated media have been put to malicious use recently.
Fraud and Scams
Why do people make deepfakes? For the most part, it’s all about the money. Scammers use deepfakes of celebrities to promote fake products, seeking to quickly boost profits with the buzz of a viral video. Fraudsters also use generative AI tools to create audio deepfakes to trick people into believing they are speaking to someone legitimate, such as when a design and engineering firm employee in Hong Kong was tricked into transferring $25 million to scammers after a deepfake call impersonated the company’s chief financial officer and multiple other staffers.
Financial services companies are particular targets because scammers can access both money and data. Deloitte’s 2024 Financial Services Industry Predictions report found that generative AI poses the biggest threat to the financial industry today, potentially enabling fraud losses to reach $40 billion in the United States by 2027, up from $12.3 billion in 2023.
“Some fraud types may be more vulnerable to generative AI than others,” the report said. “For example, business email compromises, one of the most common types of fraud, can cause substantial monetary loss, according to the FBI’s Internet Crime Complaint Center’s data, which tracks 26 categories of fraud. Fraudsters have been compromising individual and business email accounts through social engineering to conduct unauthorized money transfers for years. However, with gen AI, bad actors can perpetrate fraud at scale by targeting multiple victims at the same time using the same or fewer resources. In 2022 alone, the FBI counted 21,832 instances of business email fraud with losses of approximately US$2.7 billion. The Deloitte Center for Financial Services estimates that generative AI email fraud losses could total about US$11.5 billion by 2027 in an ‘aggressive’ adoption scenario.”
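Taken at face value, those projections imply steep year-over-year growth. The back-of-the-envelope sketch below works out the compound annual growth rates implied by the rounded figures quoted above; it is an illustrative calculation only, not Deloitte's underlying model.

```python
# Rough check of the growth rates implied by the figures quoted above
# (rounded numbers only; Deloitte's modeling assumptions may differ).

def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, an end value, and a span in years."""
    return (end_value / start_value) ** (1 / years) - 1

# U.S. fraud losses attributed to generative AI: $12.3B (2023) -> $40B projected (2027)
overall = implied_cagr(12.3, 40.0, years=4)

# Business email compromise losses: $2.7B (2022) -> $11.5B projected (2027)
bec = implied_cagr(2.7, 11.5, years=5)

print(f"Overall fraud losses: ~{overall:.1%} per year")  # roughly 34% per year
print(f"BEC losses:           ~{bec:.1%} per year")      # roughly 34% per year
```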
Beyond large-scale frauds targeting industries, AI-enabled fraud attempts can hit individuals as well. While deepfake videos can be complicated to create and are often easier for people to spot as fraudulent, audio deepfakes are cheap—requiring only a few seconds of someone’s voice to generate something convincing, according to a 2024 report from MIT Technology Review.
Scammers have used this accessible technology to send distressing messages and calls impersonating a family member who has been kidnapped and needs ransom money. These virtual kidnapping scams play on the victim’s emotions to secure a quick payout.
In one case in 2023, Jennifer DeStefano got a phone call from an unknown number, and upon picking up, she heard yelling and sobbing that sounded just like her daughter’s voice, down to the inflection. Then, a man’s voice told her that he had kidnapped her daughter, threatened to drug and assault the girl, and demanded $1 million in ransom money. DeStefano managed to contact her husband, who confirmed that their daughter was with him and unharmed. But the voice on the other end of the line was terrifyingly realistic, and DeStefano told journalists that she was convinced scammers had cloned her daughter’s voice using AI tools, gleaning the necessary audio clips from social media accounts and videos.
The Federal Trade Commission (FTC) warned, “A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member’s voice—which he could get from content posted online—and a voice-cloning program. When the scammer calls you … (it will) sound just like your loved one.”
So, what can you do? Just like in DeStefano’s case, the FTC advises, don’t trust the voice on the other end of the line. “Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can’t reach your loved one, try to get in touch with them through another family member or their friends. Scammers ask you to pay or send money in ways that make it hard to get your money back. If the caller says to wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs, those could be signs of a scam.”
Consumers reported 52,000 instances of scammers impersonating Best Buy or its Geek Squad tech support brand in 2023, far more than the second most-impersonated brand, Amazon (34,000 reports), according to a new FTC report. https://t.co/LxOPTX4ICN
— Security Management (@SecMgmtMag) May 29, 2024
Election Influence
2024 is a whirlwind of elections worldwide. People in at least 64 countries will have cast ballots in national elections this year, and some of those races are particularly contentious. Risk specialists at the World Economic Forum (WEF) noted in their Global Risks Report 2024 that misinformation and disinformation are significant high-profile, near-term risks, and they are linked to societal polarization, terrorist threats, armed interstate conflict, and the erosion of human rights. Technology-enabled disinformation can make all these factors worse.
Because AI tools are becoming easier to use, they are more accessible to a wide variety of people who can quickly create professional-looking images and communications. These AI models “have already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites,” the WEF report explained.
Like most technology-enabled influence campaigns and disinformation, “Some of it we can control, some of it we cannot control,” says Shamla Naidoo, head of cloud strategy and innovation at Netskope and an educator with IANS focusing on technology law and policy. “My primary go-to is elevate the IQ of the voting public, make them aware of the issues, give them resources to guide their thinking and their decision-making, educate them about the issues and the challenges that they might face that might make them a victim, and really empower the voter to become more aware of the world around them and the scams that do exist versus telling them how to vote.”
Deepfakes in election cycles are less about hacking voting technology or systems directly; instead, they aim to subvert opinions about how fair elections are or to besmirch specific candidates.
“This idea of psychological manipulation is an age-old spycraft type of technique,” Naidoo adds. “Nothing here is new. I think the technology, though, enables people to do it much faster, to do it much easier, and perhaps most importantly, to do it for next to no money. That is accelerating the threat exponentially, simply because the barriers to entry have almost all disappeared, if there are any even left.”
Chinese actors and bots bombarded Taiwan with fake news and deepfake disinformation before the island nation’s elections in January, attempting to sway voters toward Beijing-friendly candidates, Politico reported. Deepfake videos were used as a character assassination tool, including alleging that a presidential candidate had multiple mistresses.
In India, a high-stakes election was complicated further by a wave of confusing deepfakes—some of them obvious, crude, or jokey, and some of them uncannily convincing. AI lip-synced clips portrayed politicians resigning from their parties or celebrities endorsing candidates, layering AI-cloned voices on top of authentic video footage, The New York Times reported.
The United States presidential election is ramping up as well, with a highly volatile race between Joe Biden and Donald Trump in the spotlight. Technology and online disinformation are already playing a notable role in the struggle for attention and votes.
In May 2024, a Louisiana political consultant was indicted for a fake robocall imitating U.S. President Joe Biden, using a fake version of his voice to dissuade people from voting for him in New Hampshire’s Democratic primary election. Separately, the U.S. Federal Communications Commission (FCC) proposed to fine the consultant $6 million for the calls, which allegedly used AI-generated deepfake audio of Biden’s voice, saying the calls violated FCC rules about inaccurate caller ID information.
The Center for Countering Digital Hate (CCDH) tested dozens of text prompts related to the 2024 U.S. presidential election on four AI image generators, and in 41 percent of cases, the tools generated convincing election disinformation images, such as of candidates sitting sadly in jail cells or lying sick in a hospital bed. The tools were most susceptible to generating images promoting election fraud or intimidation, such as showing misleading images of ballots in the trash or riots at polling places.
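For readers curious how a test like CCDH's is typically structured, the sketch below is a purely hypothetical illustration: the prompts, tools, and judged_harmful review step are stand-ins, not CCDH's actual prompts, tooling, or criteria. It only shows how a "share of cases that produced disinformation" figure can be tallied.

```python
# Hypothetical sketch of a prompt red-teaming harness like the test described
# above. The prompt list and the review step are placeholders; CCDH's actual
# methodology is not reproduced here.
from typing import Callable

def disinformation_rate(prompts: list[str],
                        tools: list[str],
                        judged_harmful: Callable[[str, str], bool]) -> float:
    """Run every prompt against every tool and return the share of cases in which
    the generated image was judged (by a human reviewer) to be convincing disinformation."""
    cases = [(prompt, tool) for prompt in prompts for tool in tools]
    flagged = sum(judged_harmful(prompt, tool) for prompt, tool in cases)
    return flagged / len(cases)

# Example usage with stand-in values (the 41 percent figure above comes from
# CCDH's own testing, not from this sketch):
# rate = disinformation_rate(election_prompts, ["tool_a", "tool_b", "tool_c", "tool_d"], review)
# print(f"{rate:.0%} of cases produced convincing election disinformation")
```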
“This is a particularly concerning trend considering that claims of election fraud and voter intimidation ran rampant in the last U.S. election,” the CCDH report said. “The potential for such AI-generated images to serve as ‘photo evidence’ could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections.”
Individuals do have resources to help detect and flag misleading or manipulated content, but using them requires slowing down and being skeptical, Naidoo says.
“The consumer is so much more powerful now than they have ever been because there's a lot of information that comes at us, but there's also a bunch of resources that we need to share with people so that they can figure out whether something is fact and separate that from fiction,” she says.
“I think for the consumer, they have to meet every single piece of information that goes by their desk or by their mobile device or by their screen with skepticism,” Naidoo continues. “They should doubt everything they see. I'd say there's a couple of questions as a consumer that I would ask, which is, where did this information come from? What was the origin? Do I know and trust the origin? Because things can look authentic.”
She recommends consumers ask some pointed questions, especially when evaluating election- or politics-related content online:
- Who is the originator of this information? Do I understand who authored it?
- What is the author’s or creator’s interest in this particular material? Are there conflicts of interest?
- Is there only one perspective, or are there other perspectives that may exist elsewhere that I should go look for?
- How should I triangulate the information to determine how I feel about it?
- When I read this, what do others expect me to do with this information? Do they expect me to change my mind, vote differently, or provide support in a different way or place?
- Looking at this information, will this change my behavior? Should it?
“Before we make decisions, we should be asking a bunch of questions, almost as a way to pause and think harder about the information I have and what actions I should take,” Naidoo says. “Do I need to take this action right now or should I wait till tomorrow to take that action? Because everything is so instant.”
Geopolitical Disinformation
Deepfakes and manipulated media are just additional tools in nation-states’ arsenals when it comes to influencing public opinion, even outside of elections.
A day after U.S. officials said Ukraine could use American weapons in limited strikes within Russian borders, a fabricated video showed a U.S. State Department spokesman suggesting that the Russian city of Belgorod was a legitimate target for strikes, The New York Times reported in late May. The 49-second video clip showed telltale signs of manipulation—including an off-time lip-sync and a shirt that changed color throughout the video—but the video circulated quickly on Telegram channels, reaching Belgorod residents and eliciting responses from Russian government officials.
In a statement, Matthew Miller—the State Department spokesman imitated in the video—said, “The Kremlin has made spreading disinformation a core strategy for misleading people both inside Russia and beyond its borders. It’s hard to think of a more convincing sign your decisions aren’t working out than having to resort to outright fakes to defend them to your own people, not to mention the rest of the world.”
Outside of wars and conflict zones, however, deepfakes have been used to sow fear and confusion. A pro-Russian group released a fake documentary last year about the International Olympic Committee (IOC), using AI to spoof Tom Cruise’s voice. The group’s disinformation campaign has continued, cranking out three to eight faked videos a week in multiple languages, capitalizing on recent news events to cultivate a sense of impending violence around the Paris Olympic Games, The New York Times reported.
According to a summary from Clint Watts, general manager of Microsoft’s Threat Analysis Center, “On the ground, Russian actors may look to exploit the focus on stringent security by creating the illusions of protests or real-world provocations, thus undermining confidence in the IOC and French security forces. In-person staging of events—whether real or orchestrated—near or around Olympic venues could be used to manipulate public perceptions and generate a sense of fear and uncertainty.”
For instance, the group has produced deceptive videos pretending to come from intelligence agencies, warning potential attendees to stay away because of alleged terrorism threats, or mimicking legitimate news outlets, falsely claiming that Parisians were buying extra property insurance before the Games to cover terrorism damage and that 24 percent of tickets for the Games had been returned due to terrorism fears.
Deepfakes and audio impersonations make these campaigns more compelling and shareable on social media, and they make widespread conspiracy theories harder to debunk for officials seeking to promote a safe event.
Pornography and Gender-Based Victimization
An analysis of deepfake videos from identity theft services company Home Security Heroes found that deepfake pornography makes up 98 percent of all deepfake videos online. In those videos, 99 percent of the individuals targeted are women. Deepfake porn has been used against women as a form of blackmail, to wreck their careers, or as a form of assault, such as in revenge porn.
In late January 2024, pop mega-star Taylor Swift was depicted in multiple pornographic or violent deepfake images, which were circulated on social media platforms, amassing more than 27 million views in less than 24 hours. The content was removed from major platforms, but not everyone has the legal resources that Swift possesses to take immediate and decisive action.
According to a briefing from the Alliance for Universal Digital Rights (AUDR), “Deepfake image-based sexual abuse represents a growing and alarming form of tech-facilitated sexual exploitation and abuse that uses advanced artificial intelligence (AI) to create deceptive and non-consensual sexually explicit content. Vulnerable groups, particularly women and girls, face amplified risks and unique challenges in combatting deepfake image-based sexual abuse.”
Some of these videos and images are then used further against victims in digital sextortion schemes. In 2023, the FBI issued a public warning about malicious actors using manipulated photos or videos to extort victims for ransom or to comply with other demands, such as sending real sexually themed images or videos. Deepfake technology enables those perpetrators to create highly convincing content to pressure victims into complying.
Explicit deepfakes are also used as a threat to silence activists, politicians, and journalists online. In the wake of the Taylor Swift deepfakes, people threatened to make deepfakes of women who spoke out against the images online, Glamour reported.
“Cyberspace facilitates abuse because a perpetrator doesn’t have to be in close physical proximity to a victim,” said Amanda Manyame, digital rights advisor to Equality Now, in the Glamour article. “In addition, the anonymity provided by the Internet creates the perfect environment for perpetrators to cause harm while remaining anonymous and difficult to track down.”
Legal frameworks around tech-facilitated sexual exploitation are being built in multiple jurisdictions worldwide, but the inconsistency across borders makes protecting victims challenging, according to a briefing paper from the AUDR. Recent legislation—including the AI Act in the European Union and the provisions the UK’s new Online Safety Act added to the Sexual Offences Act—could apply to deepfakes by creating transparency obligations for platforms.
Some U.S. states have laws on the books addressing deepfakes, especially for incidents related to sexually explicit “altered depictions” and for deepfakes attempting to influence elections or political activities. In February 2024, the FTC proposed extending its impersonation fraud protections to cover individuals.
“The agency is taking this action in light of surging complaints around impersonation fraud, as well as public outcry about the harms caused to consumers and to impersonated individuals,” according to the FTC. “Emerging technology—including AI-generated deepfakes—threatens to turbocharge this scourge, and the FTC is committed to using all of its tools to detect, deter, and halt impersonation fraud.”
The FTC is debating whether the revised rule should declare it unlawful for a firm—such as an AI platform—to provide goods or services that it knows are being used to harm consumers through impersonation.
Claire Meyer is managing editor for Security Management. Connect with her on LinkedIn or via email at [email protected].