

Human or AI? Detecting Loan Fraud After Fed Cuts

The U.S. Federal Reserve’s half-point interest rate cut in September 2024 has brought a new wave of fall loan activity. But borrowers taking advantage of lower rates are not the only ones seeking to benefit. Fraudsters, using artificial intelligence (AI) to disguise their true identities and intentions, are also baiting their hooks.

AI is making it more difficult to separate real people from fake ones. This poses significant risk for financial institutions when fraudsters apply for credit or loans that are subsequently funded.

So, how can banks tell which applicants in this new wave of borrowers are human and which are AI?

The Scale of the Stolen and Synthetic Identity Problem

Synthetic fraud, according to Equifax, is a form of financial fraud in which a real person's information is stolen and combined with falsified personal information to form a new identity. It is sometimes also referred to as stolen-identity fraud.

Synthetic identity theft can feature legitimate identity data, such as a first name, last name, Social Security number, and date of birth (incidentally, exactly the data leaked in the recent AT&T breach). This legitimate data is then combined, or synthesized, with a fraudulent physical address, email address, and phone number: personally identifiable information (PII) that is easy to change. Traditional fraud monitoring systems have difficulty detecting these sorts of half-truths.

Synthetic fraud attempts rose 184 percent in the past five years, making it one of the fastest-growing financial crimes in the United States, the TransUnion 2024 State of Omnichannel Fraud Report found. From 2022 to 2023, synthetic fraud attempts increased 21 percent, during the same period in which generative AI had its breakout moment.

Financial services and fintech companies are struggling to control the threat. A Wakefield Research survey of 500 U.S. fraud and risk professionals revealed that half (50 percent) said their company’s synthetic fraud prevention is, at best, somewhat effective.

The survey also found that:

  • 87 percent of companies have extended credit to fake customers

  • 53 percent have proactively offered credit to fake customers

  • 20 percent estimate the average loss per incident is $50,000-$100,000

  • 23 percent estimate average loss at more than $100,000 per incident

How Synthetic Identities Mimic Real Customers

That fraudsters will increase their use of AI is inevitable, and the sophistication it lends their schemes promises to make matters worse. Fraudsters are already inclined to nurture their accounts over longer periods for larger financial gain.

Common behaviors of synthetic accounts designed to mimic legitimate customers include:

  • Clicking on an online acquisition marketing ad to start the application process

  • Verifying an email address

  • Obtaining a credit score of 700 or higher

  • Verifying a valid phone number

  • Accessing their account online

  • Paying a utility bill using a bill pay facility

  • Checking their FICO score using the company's online banking portal

  • Maintaining an account balance over the minimum

  • Making online purchases or donations

These moves allow fraudsters to establish a seemingly normal pattern of financial activity, build credit and credibility over time, lie low for months, and then catch the big one: a very large transaction.

Current Verification Methods and Their Shortcomings

Traditional verification techniques are failing in several key areas, chief among them keeping pace with fraudsters. Fraudsters' ability to evade synthetic fraud detection is improving faster than the security measures designed to stop them, according to more than half of those surveyed by Wakefield Research. This suggests that AI is helping create synthetic identities that can adapt to and overcome current detection methods and, worse, fine-tune an identity to make it look like a company's next best customer.

Legacy passive-identity verification systems require no interaction from the consumer: essentially a behind-the-scenes confirmation that the person trying to open an account is, in fact, who they claim to be. These systems are being fooled more frequently, so banks have adjusted their risk algorithms to try to detect what is fooling them. While this captures some incremental fraud, it comes at the expense of legitimate customers, who are subjected to more friction.


Existing passive-identity affirmation checks result in ambiguity. For example, breach data, such as that recently announced by AT&T, contains actual PII (Social Security numbers, first names, last names, and dates of birth) that can be mapped to aged, geo-located email addresses and legitimate phone numbers on inexpensive phone plans to make a fraudster look very realistic.

Indeed, fraudsters “tune” identities to match the ideal customers a bank is soliciting via acquisition marketing campaigns. The attack will often start with the fraudster clicking on the very advertisement the bank is running to capture new customers.

To stop this, banks may conduct a manual review, in which a fraud analyst looks over the application and decides whether the potential customer can be trusted. This takes time and incurs operational cost. Worse, a legitimate applicant may abandon the application due to the friction and apply elsewhere, leaving the bank with lost customer acquisition costs and lost lifetime value.

Instead, financial services and fintech companies looking for a leg up on this synthetic scourge should be implementing multilayered verification approaches.

Strategies for Distinguishing Human from AI Borrowers

Advanced data analytics, machine learning, and, yes, AI can help financial institutions fight fire with fire, processing vast amounts of data to find patterns and anomalies that humans might miss.

Machine learning algorithms can analyze historical data to identify the common characteristics of fakes. The key to solving identity fraud at scale, in an AI-driven fraud landscape, is a massive data asset that tracks a “day in the life” of an online consumer and sees those consumers with recency and frequency across a broad range of online activities.
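
As a rough illustration of that first step, the minimal Python sketch below trains a gradient-boosted classifier on labeled historical application records to learn what synthetic identities have in common. The file name, feature names (email_age_days, phone_tenure_days, and so on), and the is_synthetic label are hypothetical placeholders, not a description of any particular vendor's pipeline.

    # Minimal sketch: learn common characteristics of synthetic identities
    # from labeled historical applications. File, features, and label below
    # are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("historical_applications.csv")  # one row per application
    features = ["email_age_days", "phone_tenure_days", "activity_breadth",
                "credit_score", "address_match_score"]
    X, y = df[features], df["is_synthetic"]  # 1 = later confirmed synthetic

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Score held-out applications: higher probabilities route to review.
    scores = model.predict_proba(X_test)[:, 1]
    print("Holdout AUC:", roc_auc_score(y_test, scores))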

Deep learning neural networks, entity embeddings, and generalized classification can surface anomalies in the online activity of an identity. An identity whose online activity shifts abruptly over time is a strong indication of a fake human, as real humans are predictable.
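
As a simplified stand-in for that embedding-based approach, the Python sketch below compares an identity's recent mix of online activity categories against its own historical baseline and flags a sharp shift. The category list and drift threshold are illustrative assumptions.

    # Simplified stand-in for embedding-based anomaly detection: flag an
    # identity whose recent activity mix diverges sharply from its own
    # historical baseline. Categories and threshold are illustrative.
    from collections import Counter
    from math import sqrt

    CATEGORIES = ["news", "shopping", "banking", "social", "streaming"]
    DRIFT_THRESHOLD = 0.5  # illustrative; tune against labeled outcomes

    def category_vector(events):
        """Normalized frequency vector over activity categories."""
        counts = Counter(e["category"] for e in events)
        total = sum(counts.values()) or 1
        return [counts.get(c, 0) / total for c in CATEGORIES]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def activity_drifted(historical_events, recent_events):
        """True if the recent activity profile no longer resembles the baseline."""
        return cosine(category_vector(historical_events),
                      category_vector(recent_events)) < DRIFT_THRESHOLD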

Advanced data analytics can help correlate data across many sources to verify identity claims. As described above, the breadth of online activity and a top-down view of consumers provide a way to see whether an identity is acting fraudulently. For example, if an identity posts a comment on a media website at exactly 7:15 a.m. PT every Tuesday, that identity is almost certainly fake, as real humans are never that precise.
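
The Python sketch below illustrates that kind of timing check: it measures how tightly an identity's recurring events cluster around the same point in the week and flags near-machine precision. The one-minute tolerance and minimum event count are illustrative assumptions.

    # Flag identities whose recurring events land at suspiciously exact times,
    # such as a comment posted at 7:15 a.m. every Tuesday.
    from datetime import datetime
    from statistics import pstdev

    def seconds_into_week(ts: datetime) -> int:
        return ts.weekday() * 86400 + ts.hour * 3600 + ts.minute * 60 + ts.second

    def is_machine_precise(timestamps, tolerance_seconds=60, min_events=4):
        """True if events recur at nearly the same weekly instant."""
        if len(timestamps) < min_events:
            return False
        offsets = [seconds_into_week(t) for t in timestamps]
        return pstdev(offsets) < tolerance_seconds

    # Comments posted at exactly 07:15 on four consecutive Tuesdays
    tuesdays = [datetime(2024, 10, d, 7, 15) for d in (1, 8, 15, 22)]
    print(is_machine_precise(tuesdays))  # True -> escalate for review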

AI-driven systems can spot subtle inconsistencies in application data that might indicate fraud, and graph database technology can surface both inter- and intra-identity activity anomalies.

For example, a single identity may look just like a bank's next best customer, but a top-down view of online activities shows that the identity's behavior is being orchestrated by AI together with 17 other identities. Because they perform the same activities as a group, all of the identities are fake. A minimal sketch of detecting such a group follows below.
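
The Python sketch uses an in-memory graph (networkx) as a stand-in for a production graph database: identities that share an identical activity fingerprint are linked, and any connected group above a size threshold is flagged as likely orchestrated. The fingerprint fields and group-size threshold are illustrative assumptions.

    # In-memory stand-in for a graph-database query: link identities that share
    # an identical activity fingerprint and flag large connected groups as
    # likely AI-orchestrated. Fingerprint fields and threshold are illustrative.
    import networkx as nx

    def fingerprint(activity_log):
        """Order-preserving signature of an identity's online activity."""
        return tuple((e["action"], e["weekday"], e["hour"]) for e in activity_log)

    def find_orchestrated_groups(identities, min_group_size=5):
        """identities: dict mapping identity id -> list of activity events."""
        g = nx.Graph()
        g.add_nodes_from(identities)
        ids = list(identities)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                if fingerprint(identities[a]) == fingerprint(identities[b]):
                    g.add_edge(a, b)  # identical scripted behavior
        return [group for group in nx.connected_components(g)
                if len(group) >= min_group_size]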

Behavioral analysis and digital-activity pattern recognition can help identify suspicious people and activities.

Having access to data that tracks an applicant's online activity patterns before they apply can provide powerful insight into whether a potential new customer is genuine or a fraudster.

Successful defenses then adapt and improve over time, learning from new patterns as they emerge. Their analysis can extend beyond individual accounts to look at networks of synthetic identities.

By combining these advanced techniques, financial institutions will be better equipped to net the threat.

 

Ari Jacoby is the founder and CEO of Deduce. He is a successful serial entrepreneur and thought leader committed to democratizing access to critical fraud data after spending nearly two decades bridging the intersections of data, privacy, and security.
