Why AI- and cyber-enabled fraud are leading risks
AI-enabled fraud is outpacing ransomware in 2026. See why CISOs are shifting focus.
Apr 24, 2026 • 4 Minute Read
In 2026, worries about AI and fraud are on track to outpace ransomware in the C-suite.
That, at least, is what 800 CISOs and other experts told the World Economic Forum (WEF) for its annual Global Cybersecurity Outlook report. This year’s top concerns were AI vulnerabilities (87%), cyber-enabled fraud and phishing (77%), and supply chain-targeting tactics (65%). Ransomware plunged to fifth place (54%) after topping the chart in the prior year's survey.
The WEF isn’t the only organization to notice this trend, or to observe that AI and fraud go together like ice cream and apple pie.
Recorded Future’s 2026 Cyber Threat Analysis argued that while cybercriminals have yet to develop “novel AI-driven tradecraft” at scale and their efforts at AI-driven malware have yet to be transformational, AI’s “breadth of use meaningfully expands what lower-skilled threat actors can achieve.” Michael Daniel, head of the Cyber Threat Alliance, warned that cyber scams and fraud will embrace AI to create “an increasingly measurable economic drag across many of the world’s economies” throughout 2026 and beyond.
Lowered barriers to fraud
According to the WEF, the survey results reflect a security industry that has become an integral part of operations and is wary of fraud risks through personal experience. It’s notable that AI’s rise to the top concern was followed by fraud and supply chain disruption, two arenas where even relatively simple AI capabilities, such as writing convincing emails or mass analysis of code repositories, have proven useful to cybercriminals.
“Criminal actors are exploiting genAI to automate and scale social engineering efforts, producing realistic phishing emails, deepfake audio and video, and falsified documentation capable of evading conventional detection systems and human scrutiny,” the authors wrote.
“As resilience strengthens, risk perception shifts towards emerging threats: Among CEOs of highly resilient organizations, AI-related vulnerabilities rise to the top,” they added.
For example, respondents who admitted insufficient resilience in their organizations ranked ransomware as the second-highest concern. Meanwhile, respondents who are confident in their security postures appear to be planning for emerging threats more proactively.
Nearly three in four (73%) reported knowing someone who had been “personally affected by cyber-enabled fraud,” with the top types including:
Phishing/vishing/smishing (62%)
Invoice/payment fraud (37%)
Identity theft (32%)
Insider or employee fraud (20%)
Romance/impersonation scams (17%)
Investment/crypto fraud (17%)
“Cyber-enabled fraud has emerged as one of the most disruptive forces in the digital economy, undermining trust, distorting markets and directly affecting people’s lives,” WEF managing director Jeremy Jurgens told Infosecurity Magazine. Jurgens has referred to fraud as the “connective tissue of cyber risk,” as it impacts every level from individuals to companies and nations.
The National CIO Review’s David Eberly, for example, argued that WEF survey data showing that CEOs are more concerned about fraud and that CISOs are more concerned about ransomware could reflect a lack of alignment in cybersecurity priorities across roles. While CEOs appear to have gotten the message about cybersecurity as an overarching business risk, CISOs are directly responsible not only for managing and lowering that risk, but for preventing and remediating operational disruptions.
“The priorities of CISOs appear more influenced by system functionality and compliance readiness than financial exposure alone,” Eberly wrote.
AI security firm Trustmi’s 2025 social engineering report highlights one way a lack of team alignment might play out. Just 27% of respondents said finance and security teams split ownership of fraud prevention duties, while over a third (34.5%) said the lack of coordination had led or nearly led to fraud.
Fortunately, CISOs and CEOs appear to be working together more closely: as of mid-2025, 82% of CISOs reported to their CEO, per data platform Splunk, up from 47% two years earlier.
Diminishing ransomware returns
Cybercrime is constantly evolving, and the latest data suggests attackers are re-evaluating their tactics.
While ransomware remains rampant, Google Threat Intelligence Group (GTIG) recently noted that virtually all top gangs have transitioned to extorting targets with stolen data rather than merely encrypting it. (Incidents involving solely data extortion without ransomware remain a minority, but grew from 2% in 2020 to over 15% in 2025, per GTIG.)
“We’re actually seeing a decrease in successful ransomware deployment,” dropping from 54% of incidents in 2024 to 36% last year, Bavi Sadayappan, a GTIG senior threat intelligence analyst, told Cyberscoop.
The financial picture tells a different but complementary story. Where GTIG tracks attack deployment and incident volume, FinCEN and Chainalysis measure money flows—and on that front, ransomware operators are losing ground. The Treasury Department’s Financial Crimes Enforcement Network (FinCEN) has reported plummeting payment rates from victims, and a Chainalysis report last year recorded an 8% decrease in total on-chain ransomware payments in 2025.
Notably, Chainalysis did observe a rise in total ransomware incidents over the same period—a finding that appears to contradict GTIG’s data but may instead reflect different counting methodologies. Both datasets agree on the direction of travel for ransomware economics: more attacks, lower yield per attack. That dynamic became a significant driver of fragmentation in the Ransomware-as-a-Service (RaaS) market, as affiliates increasingly sought reliable returns elsewhere.
The looming identity crisis
Ransomware is loud, aggressive, and causes spectacular disruption to operations. It also requires broad access and the execution of exploit chains. Cyber-enabled fraud, though, often strikes when the target’s systems are working perfectly and relies on those systems correctly executing simple instructions from compromised or malicious users. No one gets paid if the invoices don’t go through.
Much of the worry about both AI and fraud seems to center on one particular problem: identity and access management (IAM). Verifying user authorization and intent will become even more of a friction point for security teams, especially as AI agents with potentially unforeseeable capabilities enter the enterprise arena.
Zscaler researchers warn that organizations have failed to prioritize secure ways to authenticate workloads, even before the current AI rush. Security experts estimate that around 95% of enterprises using or testing autonomous agents have failed to secure their identities with existing technologies such as public key infrastructure, according to CSO Online. Introducing these agents to enterprises, especially when they perform tasks on behalf of humans, could exponentially increase the surface area for fraud.
FinTech Global noted that risks are rising for everyone, not just enterprises, with the Federal Trade Commission reporting significant annual growth in consumer fraud (up 25% to $12.5 billion in 2024, with 2025 figures not yet available).
“Recent developments in generative AI are lowering the barriers to executing all kinds of attacks, while at the same time increasing their sophistication and making them appear more credible,” Planet VPN co-founder Konstantin Levinzon told the site.
Get the edge on emerging threats. Join thousands of security leaders who read Exploit to stay ahead of AI-driven fraud and the next wave of cyber risks. Subscribe here.