

More Than a Third of Businesses Use AI to Combat Fraud, According to Experian's Annual Survey

More than a third of companies say they are using AI to combat fraud, according to Experian's 10th annual survey.

In a world where digital transactions are increasingly common, concerns about fraud are on the rise. According to the 2025 U.S. Identity and Fraud Report from Experian, a global data and technology company, 90% of businesses express concern about fraud, with identity theft, transactional payment fraud, account takeover, peer-to-peer payment scams, and first-party fraud the most frequently experienced fraud events over the past year.

The report also highlights growing concern over AI-generated fraud and deepfakes. Despite this, businesses are still relying on traditional verification methods like passwords and PINs, while more secure methods like biometrics and behavioral analytics remain underused.

However, the need for organizations to invest in innovative fraud prevention methods that meet consumers' expectations is clear. Some 72% of business leaders expect AI-generated fraud and deepfakes to be major challenges by 2026. To address this, companies are increasingly investing in advanced AI-driven methods.

These methods include custom AI models trained on organizational data, real-time behavioral biometrics, continuous retraining of detection algorithms, and federated learning to maintain evolving defenses. Key innovative techniques include multi-factor authentication enhanced with behavioral biometrics, cryptographic device authentication, and verification protocols resistant to synthetic media manipulation.
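The details of such systems are proprietary, but the core pattern of scoring behavioral-biometric signals with a continuously retrained anomaly detector can be sketched in a few lines. The snippet below is a minimal illustration, assuming scikit-learn's IsolationForest and invented session features (typing speed, dwell time, navigation depth, amount); it is not drawn from Experian's report or any vendor's actual implementation.

```python
# Minimal sketch: behavioral-biometric anomaly scoring with periodic retraining.
# Feature names and thresholds are illustrative assumptions, not taken from
# the Experian report or any specific product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

def collect_sessions(n: int) -> np.ndarray:
    """Simulate per-session behavioral features:
    [keystrokes/sec, key dwell time (ms), pages visited, transaction amount]."""
    return np.column_stack([
        rng.normal(5.0, 1.0, n),      # typing speed
        rng.normal(120.0, 20.0, n),   # key dwell time
        rng.poisson(8, n),            # navigation depth
        rng.lognormal(4.0, 1.0, n),   # transaction amount
    ])

def train_detector(sessions: np.ndarray) -> IsolationForest:
    """Fit an unsupervised anomaly detector on a recent window of traffic."""
    model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
    model.fit(sessions)
    return model

# Initial training window.
model = train_detector(collect_sessions(5_000))

# Score a new batch; -1 marks sessions flagged for step-up verification.
new_batch = collect_sessions(100)
flags = model.predict(new_batch)
print(f"flagged {np.sum(flags == -1)} of {len(new_batch)} sessions for review")

# "Continuous retraining": refit on the latest window so the detector
# tracks drifting user behavior and new fraud patterns.
model = train_detector(collect_sessions(5_000))
```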

Experian's report reveals that over a third of companies are already using AI—including generative AI—to fight fraud, accelerating investments in AI models and advanced analytics to improve decision-making and tackle emerging fraud threats. While trust in AI tools remains low among consumers, firms are combining data, analytics, and technology innovation to create proactive fraud prevention solutions.

Regulatory frameworks like the EU’s AI Act complement technical innovations by introducing transparency obligations, mandatory marking of AI-generated content, and documentation requirements for the systems that produce it. This legal oversight improves traceability and accountability when AI tools are misused in fraudulent schemes, supporting stronger defenses against AI-generated fraud and deepfakes.

Financial institutions, in particular, use multi-layered strategies combining technology, policy, and human oversight to build systemic resilience. Behavioral biometrics analyze real-time user interactions such as typing and navigation patterns, while inter-bank intelligence-sharing networks surface anomalies indicative of advanced fraud attempts. Cryptographic authentication and mandatory verification delays for high-value transactions help prevent manipulation by deepfakes.
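To make the last point concrete, here is a minimal sketch of how cryptographic authentication and a mandatory verification delay might be wired together, assuming a per-device shared secret, a hypothetical $10,000 threshold, and a four-hour hold. It illustrates the general pattern described above, not any bank's actual controls.

```python
# Minimal sketch: device-bound message authentication plus a mandatory review
# hold for high-value transfers. The secret handling, threshold, and delay are
# illustrative assumptions.
import hmac
import hashlib
import json
from datetime import datetime, timedelta, timezone

DEVICE_SECRET = b"per-device key provisioned at enrollment"  # hypothetical
HIGH_VALUE_THRESHOLD = 10_000        # policy-defined amount, e.g. USD
MANDATORY_HOLD = timedelta(hours=4)  # cooling-off period before release

def sign_transaction(payload: dict) -> str:
    """Compute an HMAC over the canonical payload using the device-bound key."""
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(DEVICE_SECRET, message, hashlib.sha256).hexdigest()

def verify_and_schedule(payload: dict, signature: str) -> dict:
    """Reject transactions whose signature does not match; delay large ones."""
    expected = sign_transaction(payload)
    if not hmac.compare_digest(expected, signature):
        return {"status": "rejected", "reason": "device authentication failed"}
    now = datetime.now(timezone.utc)
    if payload["amount"] >= HIGH_VALUE_THRESHOLD:
        return {"status": "held", "release_at": (now + MANDATORY_HOLD).isoformat()}
    return {"status": "approved", "processed_at": now.isoformat()}

tx = {"from": "acct-001", "to": "acct-999", "amount": 25_000}
sig = sign_transaction(tx)           # produced on the enrolled device
print(verify_and_schedule(tx, sig))  # held for review before release
```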

Platforms like NiCE Actimize apply machine learning and collective intelligence derived from extensive data networks to detect and block scams and account takeover fraud, the most prevalent forms of fraud, which are increasingly intertwined with AI-generated tactics.
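The vendor-specific details are not public, but the underlying idea of pooling intelligence across a data network can be sketched with federated averaging: each participant trains a simple fraud scorer locally and only model parameters are shared. The snippet below is a generic, simplified illustration under those assumptions; it does not reflect NiCE Actimize's actual architecture, and the feature names are invented.

```python
# Minimal sketch of "collective intelligence" via federated averaging:
# several institutions train simple fraud scorers locally and share only
# model weights, which are averaged into a consortium model.
import numpy as np

rng = np.random.default_rng(7)
N_FEATURES = 4  # e.g. amount z-score, new-device flag, payee age, login risk

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_train(X, y, epochs=200, lr=0.1):
    """Plain logistic regression trained with gradient descent at one bank."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def make_bank_data(n=1_000):
    """Synthetic per-bank data: fraud probability rises with the features."""
    X = rng.normal(0, 1, (n, N_FEATURES))
    y = (sigmoid(X @ np.array([1.5, 1.0, 0.5, 2.0])) > rng.random(n)).astype(float)
    return X, y

# Each participating bank trains locally; only weights leave the institution.
local_weights = [local_train(*make_bank_data()) for _ in range(5)]

# Federated averaging: the consortium model is the mean of the local models.
consortium_w = np.mean(local_weights, axis=0)

# Any member can now score a suspicious login or transfer with the shared model.
suspicious = np.array([2.0, 1.0, 0.5, 3.0])
print(f"consortium fraud score: {sigmoid(suspicious @ consortium_w):.2f}")
```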

As consumers grow increasingly wary of transacting online, with half wanting stronger online safeguards, businesses must prioritize security. With fraud losses reaching a record $12.5 billion in 2024, a 25% increase from the previous year according to the FTC, it's clear that businesses must act to protect their customers and maintain trust.

By leveraging a combination of bespoke AI fraud detection models, continuous adaptive learning, behavioral analytics, cryptographic verification, transparency regulations, and collaborative intelligence networks, businesses can stay ahead of AI-fueled fraud and deepfake threats.

[1] Experian. (2025). 2025 U.S. Identity and Fraud Report.
[2] European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council on Artificial Intelligence (AI Act).
[3] NiCE Actimize. (n.d.). Fraud and Risk Management Solutions.
[4] Financial Conduct Authority. (2021). Operational Resilience: Impact Tolerance for Operational Disruptions.
[5] National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1.

  1. Experian's 2025 U.S. Identity and Fraud Report shows that concern about fraud, particularly AI-generated fraud and deepfakes, is widespread among businesses handling digital transactions.
  2. With 72% of business leaders expecting AI-generated fraud and deepfakes to be major challenges by 2026, there is a growing need for innovative fraud prevention methods that incorporate advanced AI-driven techniques.
  3. To combat AI-fueled fraud and deepfake threats, businesses are turning to a mix of bespoke AI fraud detection models, continuous adaptive learning, behavioral analytics, cryptographic verification, transparency regulations, and collaborative intelligence networks.
