The rise of AI-assisted identity fraud is radically transforming cybersecurity, according to the 2025 Identity Fraud Report from the Entrust Cybersecurity Institute and Onfido. The report documents a dramatic increase in sophisticated, AI-driven attacks and finds that a deepfake incident now occurs every five minutes. It also records a staggering 244% year-over-year increase in digital document forgery, marking a shift from traditional physical counterfeiting to the manipulation of digital credentials. Digital forgeries now account for 57% of all document fraud cases, underlining how fraudsters increasingly rely on generative AI tools and “as-a-service” platforms to launch identity injection attacks at scale.
Deepfake technology has drawn particular attention from organizations worldwide, as criminals exploit tools such as realistic face-swapping applications to carry out biometric fraud at unprecedented scale. The report notes that deepfakes now account for 40% of all biometric fraud cases and enable a range of illicit activities, including fraudulent account openings, account takeovers, phishing scams, and misinformation campaigns. Simon Horswell, a senior fraud specialist at Entrust, describes this fusion of AI and fraud as a clear shift in the global fraud landscape that all business leaders must acknowledge. He warns that these sophisticated attacks threaten every sector and individual, and that security teams must adapt their strategies proactively to stay secure in an increasingly treacherous digital environment.
The financial services sector has emerged as the primary target for these AI-driven fraud attempts. Cryptocurrency platforms face particularly significant threats: crypto-related fraud surged 50% year-over-year and accounted for 9.5% of all fraud incidents in 2024, an increase driven largely by rising cryptocurrency valuations that attract investors and fraudsters alike. Traditional banking sectors, including lending and mortgages, follow closely and are seeing more fraudulent onboarding attempts, with inflationary pressures fueling scams that target vulnerable consumers. A 2024 Deloitte survey predicts that deepfake-enabled financial fraud will escalate further in the coming year, corroborating Onfido's concerns about the evolution of fraudulent techniques.
The Entrust report advocates digital identity verification as a critical component in combating financial crime, particularly during onboarding. Establishing trust at the first point of contact is essential for organizations to thwart fraud built on identity theft and manipulation. The emphasis is not merely on responding to scams after the fact but on instituting robust verification processes that ensure security and trustworthiness from the outset. This proactive approach is crucial in a climate where AI-driven technologies are blurring the line between authentic and fraudulent identities.
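To make the onboarding recommendation concrete, the sketch below shows how document, liveness, and face-match checks might be combined into a single verification decision. It is a minimal illustration in Python, not drawn from the report or any vendor's API; the signal names, thresholds, and decision logic are assumptions made for the sake of example.

```python
from dataclasses import dataclass

# Hypothetical signals an identity verification provider might return during
# onboarding; the field names and thresholds here are illustrative only.
@dataclass
class VerificationSignals:
    document_authenticity: float   # 0.0-1.0, forgery/tamper analysis of the ID document
    liveness_score: float          # 0.0-1.0, defense against deepfake/injection attacks
    face_match_score: float        # 0.0-1.0, similarity between selfie and document photo

@dataclass
class OnboardingDecision:
    approved: bool
    reasons: list[str]

def evaluate_onboarding(signals: VerificationSignals,
                        doc_threshold: float = 0.9,
                        liveness_threshold: float = 0.85,
                        match_threshold: float = 0.8) -> OnboardingDecision:
    """Combine document, liveness, and face-match checks into one decision.

    The thresholds are placeholders; real deployments would tune them against
    observed fraud rates and false-rejection rates.
    """
    reasons = []
    if signals.document_authenticity < doc_threshold:
        reasons.append("document failed forgery/tamper analysis")
    if signals.liveness_score < liveness_threshold:
        reasons.append("biometric capture failed liveness (possible deepfake or injection)")
    if signals.face_match_score < match_threshold:
        reasons.append("selfie does not match document photo")
    return OnboardingDecision(approved=not reasons, reasons=reasons)

if __name__ == "__main__":
    # Example: a convincing document but a low liveness score, as might occur
    # when a synthetic or pre-recorded face video is injected into the flow.
    decision = evaluate_onboarding(
        VerificationSignals(document_authenticity=0.97,
                            liveness_score=0.42,
                            face_match_score=0.91))
    print(decision)
```

The design point the sketch reflects is the report's emphasis on establishing trust at first contact: each check runs at account opening, and any single failure blocks onboarding rather than being reconciled after the fraudster is already inside.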
In sum, the report highlights a pressing need for organizations in the financial sector and beyond to prioritize the development and deployment of sophisticated security measures. As fraudsters continuously refine their techniques with tools such as deepfakes and document forgery kits, businesses must adopt a forward-thinking stance to counter these threats effectively. Industry leaders such as Simon Horswell underscore that the responsibility to adapt lies firmly with organizations, which must not only respond to changing threats but also anticipate the future risks AI introduces to fraud.
Industry experts and cybersecurity leaders urge an ongoing commitment to security innovation, suggesting that regular assessments of existing systems, along with employee training on spotting fraud, can bolster organizational defenses against these tactics. This holistic approach to understanding and countering AI-assisted fraud forms the bedrock of a more resilient cyber landscape capable of mitigating the growing risks posed by evolving technology. Collaboration among organizations, regulatory bodies, and technology providers will be vital in forging secure pathways for identity verification and countering the global fraud epidemic driven by AI advancements.