Synthetic identity fraud occurs when criminals combine real and fabricated information to create entirely new, fictitious identities. Unlike traditional identity theft, where criminals steal and use an existing person's complete identity, synthetic fraud stitches together fragments: a real Social Security number paired with a fake name, or a fabricated address combined with a legitimate employment history. The result is a persona that doesn't correspond to any actual individual.
The scale of the problem
Synthetic identity fraud has become one of the fastest-growing forms of financial crime. According to Federal Reserve-affiliated analysis, synthetic identity fraud accounted for $35 billion in losses in 2023 alone. This figure reflects the increasing sophistication of fraud operations and the difficulty financial institutions face in detecting identities that are partially legitimate.
The crime is particularly insidious because it often goes undetected for extended periods. A synthetic identity might build credit history over months or years, making small purchases, paying bills on time, and appearing to be a legitimate customer. When the fraudster finally "busts out," maxing out credit lines and disappearing, the financial institution is left holding the loss with no real person to pursue.
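The bust-out pattern described above lends itself to simple behavioral heuristics. The sketch below is a hypothetical illustration (the function name, thresholds, and data are all invented for this example): flag accounts whose credit utilization spikes sharply after a long stretch of low, well-behaved usage.

```python
# Hypothetical bust-out heuristic: a long run of low credit utilization
# followed by a sudden spike is a classic synthetic-identity signature.
# Thresholds here are illustrative, not tuned values from any real system.

def busts_out(monthly_utilization, quiet_months=6, quiet_max=0.3, spike_min=0.9):
    """Return True if a long low-utilization history ends in a sudden spike."""
    if len(monthly_utilization) <= quiet_months:
        return False  # not enough history to establish a "quiet" baseline
    history, recent = monthly_utilization[:-1], monthly_utilization[-1]
    quiet = all(u <= quiet_max for u in history[-quiet_months:])
    return quiet and recent >= spike_min

# 18 months of small, on-time usage, then the credit lines are maxed out.
assert busts_out([0.1] * 18 + [0.98])
assert not busts_out([0.1] * 18 + [0.2])  # steady customer, no spike
```

Real fraud models combine many such signals (velocity, device, network links between applications); this single-feature rule only illustrates why the "patient" phase of the scheme matters to detection.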
How synthetic identities are created
Fraudsters construct synthetic identities through several methods. The most common approach combines a real Social Security number (often belonging to a child, elderly person, or recent immigrant who rarely uses credit) with fabricated personal details. Because credit bureaus create new files when they encounter unfamiliar combinations of SSN and name, the synthetic identity begins building its own credit history.
Other elements may include genuine addresses obtained from data breaches, fabricated employment histories, and legitimate-seeming phone numbers and email addresses. The resulting identity passes many verification checks because its individual components are real; only the combination is fraudulent.
The role of generative AI
Generative AI has fundamentally changed the synthetic identity landscape. What once required skilled fraudsters to spend hours assembling convincing fake personas can now be automated in seconds. AI models can generate realistic faces, voices, utility bills, bank statements, and employment records that pass visual inspection.
The Financial Crimes Enforcement Network (FinCEN) has warned that criminals are increasingly using generative AI to create deepfake videos, synthetic documents, and realistic audio to bypass identity checks and exploit financial systems at scale. Traditional verification methods (uploading a selfie, scanning a document, providing a utility bill) are buckling under this pressure because AI can convincingly fabricate every one of those artifacts.
Why traditional verification fails
Synthetic identities are challenging to detect because they exploit weaknesses in the traditional methods of identity verification. Systems that verify whether individual data elements are valid (Is this SSN real? Does this address exist?) may confirm each piece, but overlook that the combination is fraudulent.
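The gap between element-level and combination-level checks can be shown in a minimal sketch. Everything here is hypothetical (the registries, the SSN, and the names are toy data), but the structure mirrors the weakness described above: each piece validates on its own while the combination matches no real person.

```python
# Toy registries: stand-ins for the data an element-wise verifier might
# query, and for the cross-source pairings it typically does NOT query.
KNOWN_IDENTITIES = {
    ("078-05-1120", "Jane Doe"),   # the only real (SSN, name) pairing
}
VALID_SSNS = {"078-05-1120"}       # SSN was genuinely issued
VALID_ADDRESSES = {"123 Main St"}  # address exists and is deliverable

def element_checks_pass(ssn, name, address):
    """Element-wise validation: is each piece real on its own?
    Note the name isn't checked -- there is no standalone registry of names."""
    return ssn in VALID_SSNS and address in VALID_ADDRESSES

def combination_check_passes(ssn, name):
    """Combination validation: has this SSN ever belonged to this name?"""
    return (ssn, name) in KNOWN_IDENTITIES

# A synthetic identity: real SSN, real address, fabricated name.
synthetic = ("078-05-1120", "John Fakename", "123 Main St")

assert element_checks_pass(*synthetic)               # every piece checks out
assert not combination_check_passes(*synthetic[:2])  # but no such person exists
```

A verifier that only runs the first function approves the synthetic identity; only the second function, which requires cross-source pairing data, exposes it.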
Document-based verification is particularly vulnerable. A synthetic identity backed by AI-generated documents can pass checks that simply analyze whether a document appears legitimate. The industry consensus is clear: verification methods that rely on examining images or documents are increasingly ineffective against sophisticated synthetic fraud.
The path forward
A key defense is strengthening identity proofing and verification using higher-assurance signals, potentially including cryptographically verifiable digital credentials, alongside fraud analytics, device and behavior signals, authoritative data checks, and controls that address account and payment fraud over time.
Verifiable digital credentials address synthetic fraud at its root. When a credential is cryptographically signed by the issuing authority and bound to the holder's device, there's no way to stitch together fragments from different sources. Either the credential was issued to you legitimately, or it wasn't. No amount of AI sophistication can forge a valid cryptographic signature from a state DMV.
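The all-or-nothing property of a signed credential can be sketched in a few lines. Real verifiable credentials use asymmetric signatures (the issuer signs with a private key; anyone can verify with the issuer's public key). The sketch below substitutes Python's stdlib HMAC as a simplified stand-in, purely to show the property the text describes: change any field, and verification of the whole credential fails. The key and claims are invented for this example.

```python
import hashlib
import hmac
import json

# Simplified stand-in: real verifiable credentials use asymmetric
# signatures, not HMAC. This demo key is hypothetical -- never hardcode
# real keys.
ISSUER_KEY = b"demo-dmv-signing-key"

def sign_credential(claims: dict) -> bytes:
    # Canonical serialization so the same claims always sign identically.
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()

def verify_credential(claims: dict, signature: bytes) -> bool:
    return hmac.compare_digest(sign_credential(claims), signature)

credential = {"name": "Jane Doe", "dob": "1990-01-01", "issuer": "Demo DMV"}
sig = sign_credential(credential)

assert verify_credential(credential, sig)  # the issued credential verifies

# Stitching in a fragment from elsewhere invalidates the whole credential:
tampered = dict(credential, name="John Fakename")
assert not verify_credential(tampered, sig)
```

Because the signature covers the entire set of claims, a fraudster cannot swap in a real SSN or a fake name after issuance; that is the "either it was issued to you or it wasn't" guarantee in code.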
This is why synthetic identity fraud isn't just a security problem; it's a catalyst for digital identity modernization. The same technology that protects against deepfakes and fabricated documents also enables selective disclosure, privacy preservation, and user control. Decentralized identity isn't just a better model; against synthetic fraud, it's becoming the only defensible one.