

Why DNA Data is the Next Target for Identity Theft Fraudsters

Identity theft is getting more personal. Instead of just stealing Social Security numbers and other personally identifiable information (PII), today’s fraudsters and hackers are going after people’s genetic material. That’s right: DNA is becoming a new weapon in the identity fraud arms race.

In December 2023, the popular genetic testing company 23andMe announced that hackers stole ancestry profile data from 6.9 million of its users. Approximately 5.5 million of these hacked profiles included data about the customers’ percentage of DNA shared with matches.

The full ramifications of the 23andMe data breach are still being examined, but this incident raises disturbing questions about the most intimate levels of data security and online privacy. What would happen if a bad actor had your entire genetic code?

Let’s look at a few big reasons why DNA data is becoming the next target in identity theft—and how your organization can prepare to stay ahead of emerging threats to your customers.

Synthetic Fraud is Growing Fast—and Getting Smarter

Synthetic fraud is one of the fastest-growing financial crimes, and it’s a whole new level of security threat.

In traditional identity theft, fraudsters steal real people’s personal information to open fraudulent credit card accounts or take over that individual’s existing accounts. Synthetic fraud happens when criminals build a fake identity around real people’s data: an actual Social Security number, date of birth, and first and last name, paired with a new physical address, email address, and phone number. In short, a real person with fake contact details.
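
To make the distinction concrete, here is a minimal sketch in Python. Every field, value, and the naive_identity_check function are invented for illustration, not drawn from any real vendor’s system; the point is simply why a check that validates only the stolen, “real” fields waves a synthetic identity straight through:

```python
from dataclasses import dataclass

@dataclass
class IdentityRecord:
    # Core PII: in a synthetic identity, these fields are stolen from a real person
    ssn: str
    full_name: str
    date_of_birth: str
    # Contact details: in a synthetic identity, these fields are fabricated
    address: str
    email: str
    phone: str

def naive_identity_check(applicant: IdentityRecord, bureau: IdentityRecord) -> bool:
    """Validate only the 'real' fields, as a naive check might.

    Because the SSN, name, and date of birth all match the bureau file,
    a synthetic applicant passes even though every contact field is fake.
    """
    return (
        applicant.ssn == bureau.ssn
        and applicant.full_name == bureau.full_name
        and applicant.date_of_birth == bureau.date_of_birth
    )

# The bureau's file on a real person (all values invented for illustration)
bureau_record = IdentityRecord(
    ssn="123-45-6789", full_name="Jane Doe", date_of_birth="1980-04-12",
    address="12 Real St", email="jane@example.com", phone="555-0100",
)

# A synthetic applicant: real core PII paired with brand-new contact details
synthetic_applicant = IdentityRecord(
    ssn="123-45-6789", full_name="Jane Doe", date_of_birth="1980-04-12",
    address="99 Drop Box Ln", email="jd.new@example.net", phone="555-0199",
)

print(naive_identity_check(synthetic_applicant, bureau_record))  # True
```

The fabricated contact details are never challenged, and that gap is exactly what synthetic fraud exploits.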

Traditional identity theft was often like a quick smash-and-grab robbery: fraudsters would open a credit card account and ring up thousands of dollars in purchases before being detected. Synthetic fraud, by contrast, is a long con, in which fraudsters use impersonation tactics to persuade businesses or individuals to send them money.


This is not a future problem or a worst-case scenario for technology that doesn’t yet exist. Synthetic fraud is already happening. A recent Wakefield survey commissioned by my company found that 76 percent of fraud and risk professionals believe their organization already has customers who are synthetic fraudsters, and that 87 percent of financial services organizations have extended credit to fake accounts.

Financial institutions must already comply with Know Your Customer requirements; their next challenge will be Knowing if Your Customer Is Real.

AI: Supercharging Synthetic Fraud

Every new era of technology brings new weapons for fraudsters, and generative AI is no exception. But the rapid pace of generative AI’s adoption and development is creating special challenges for security teams.

AI is already allowing the convincing creation and unauthorized distribution of personal details and characteristics. The most sophisticated fraudsters are using AI to create what I call SuperSynthetic identities.

Here are a few examples of what fraudsters are using generative AI to do:

  • Create simulated (but convincing) online activity

  • Set up credible email addresses

  • Create accounts with usernames and passwords

  • Generate realistic-looking documents

  • Make deepfake voice recordings

SuperSynthetic identity fraudsters are more patient, cunning, and detail-oriented than traditional criminals. They will often go so far as to sign up for a cheap mobile plan (making it easier to get around two-factor authentication), buy low-cost products, pay bills online, sign up for a debit card and make low-dollar payments, and otherwise build up a realistic pattern of digital financial activity that makes them look like a real person.
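
To illustrate why this patience pays off, here is a toy rule-based scoring sketch. The signals and thresholds are invented for illustration and bear no relation to any production fraud model; the point is that an identity that has “aged” itself with small, regular payments scores lower than a smash-and-grab account, even though both are fraudulent:

```python
from statistics import mean

def toy_risk_score(account_age_days: int, payments: list[float],
                   prepaid_phone: bool) -> int:
    """Toy rule-based fraud score: higher means more suspicious.

    Thresholds are invented for illustration only.
    """
    score = 0
    if account_age_days < 90:              # brand-new accounts draw scrutiny
        score += 2
    if payments and mean(payments) > 500:  # large average spend draws scrutiny
        score += 2
    if not payments:                       # no transaction history draws scrutiny
        score += 1
    if prepaid_phone:                      # cheap plans ease two-factor workarounds
        score += 1
    return score

# A patient SuperSynthetic identity: aged account, small regular payments
print(toy_risk_score(400, [12.99, 8.50, 15.00, 9.99], prepaid_phone=True))  # scores 1
# A classic smash-and-grab: brand-new account, big purchases
print(toy_risk_score(10, [1200.00, 950.00], prepaid_phone=False))           # scores 4
```

Against simple rules like these, the SuperSynthetic account is nearly indistinguishable from a legitimate low-activity customer, which is why behavioral detection has to go much deeper than thresholds.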

There are even criminal fraud-as-a-service operations on the Dark Web selling verified accounts at banks and betting sites. What happens when DNA data gets added to this mix?

Why Real DNA Could Create a New Wave of Fake Customers

Security and privacy teams are already battling a raging firestorm of synthetic fraud. Adding DNA data to AI-generated identities could throw gasoline onto the fire and create unprecedented challenges for security technology teams.

No one knows yet exactly how harmful the recent 23andMe data leak will be, or how criminals might use this ancestry data. In an ominous sign, leaked data from 23andMe appeared to focus on people with Chinese and Ashkenazi Jewish ancestry. If additional ancestry or DNA data is leaked in the future, could this information be used for targeted cyber-harassment, hate crimes against vulnerable groups, blackmail campaigns, or fake child support claims?


Leaked DNA data could also supercharge synthetic identity fraud by making it easier for criminals to impersonate relatives or set up fake identities to open financial accounts. One popular scam happening right now is deepfake, AI-generated voicemails that sound like they’re from a relative in distress asking for money. What if these fraudsters had access to a broader range of family information?

We could see a wave of hard-to-detect scams in which criminals impersonate your long-lost cousin, backed by accurate ancestry data. And security questions like “your mother’s maiden name” offer little protection when synthetic fraud combined with ancestry data can hand hackers your entire family tree.
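
To see why knowledge-based questions crumble, consider a minimal sketch. The leaked_family_tree data and the lookup are entirely hypothetical; the point is that leaked ancestry data turns a “secret” answer into a mechanical lookup:

```python
# Hypothetical leaked ancestry data mapping a target to family relationships
leaked_family_tree = {
    "victim@example.com": {
        "mother": "Mary Doe",
        "mothers_maiden_name": "Smith",
        "cousins": ["Alex Smith", "Sam Smith"],
    },
}

def answer_security_question(target_email: str) -> str | None:
    """Derive "mother's maiden name" from leaked genealogy data: no guessing required."""
    record = leaked_family_tree.get(target_email)
    return record["mothers_maiden_name"] if record else None

print(answer_security_question("victim@example.com"))  # Smith
```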

As noted above, AI already allows the convincing creation and unauthorized distribution of personal details and characteristics. If DNA data gets combined with these capabilities, synthetic fraud will reach daunting new levels of complexity and uncanny realism.

Security professionals need a new level of vigilance and adaptability, and more than ever, the security world needs proactive, creative thinking about how to counteract these powerful trends in fraud. Ferreting out the fake customers can help your organization protect your real customers—and defend your brand reputation.

 

Ari Jacoby is the founder and CEO of Deduce. He is a successful serial entrepreneur and thought leader on a mission to democratize access to critical fraud data after spending nearly two decades bridging the intersections of data, privacy, and security. Prior to founding Deduce, Jacoby led companies, including Solve Media/Circulate (acquired by LiveRamp) and Voicestar (acquired by Marchex), to successful exits. He is now dedicated to protecting businesses and their consumers from identity fraud threats while simultaneously creating more secure, frictionless experiences. Jacoby attended Georgetown University, where he received a BA in government and economics. Follow Jacoby on LinkedIn or Twitter at @arijacoby.

© Ari Jacoby
