AI-augmented deepfakes are increasingly common in cyberattacks on businesses and government agencies, and most organizations are aware of the danger. However, there's a preparation paradox at work: even as most organizations feel ready for the onslaught, they lag behind in investing in technical defenses against deepfakes, experts say.
On Oct. 7, AI giant OpenAI published research showing that a growing number of criminal and nation-state groups are using large language models (LLMs) to improve their attack workflows and create better phishing lures and malware. A second report, published by email security firm Ironscales on Oct. 9, found that these approaches seem to be working: Overall, the vast majority of midsized firms (85%) have seen attempts at deepfake and AI-voice fraud, and more than half (55%) suffered financial losses from such attacks, according to the survey-based report.
Most companies are taking the threat seriously, but are nonetheless struggling to keep up, says Eyal Benishti, CEO of Ironscales.
Attackers are using a variety of AI techniques to enhance their attack pipeline. Human digital twins can be trained on public information about a person to help create more realistic phishing attacks and, combined with voice samples, convincing audio deepfakes. Concerns over misuse of AI led Microsoft to mostly scuttle a voice-cloning feature that it could have integrated into apps such as Teams; the technology would have allowed a user, or an attacker, to hijack someone's voice for all kinds of fraud attempts.
AI-Generated Cyberattacks Proliferate
Attackers are already using such techniques, according to cybersecurity experts. The number of audio deepfakes encountered by businesses is on track to double in 2025, according to CrowdStrike's "2025 Threat Hunting Report." Currently, static deepfake images and AI-augmented business email compromise (BEC) attacks top the list of techniques encountered by businesses — with 59% of organizations encountering those techniques, according to the Ironscales report, which surveyed 500 US-based information-technology and cybersecurity professionals working at mid-sized companies with 1,000 to 10,000 employees.
While phishing used to cast a wide net to catch every fish at once, now it's about using the exact bait needed for each individual fish, says April Lenhard, principal product manager for cyber threat intelligence at cybersecurity firm Qualys.
"Much like deepfake photos that easily blur the line between real and fake, AI-crafted emails are now indistinguishable from an email a real boss or family member would send, which makes them much more dangerous," she says.
The typical company financially impacted by deepfake attacks lost an estimated $167,000 in the past 12 months; the mean loss, skewed by a handful of outsized incidents, was $280,000. Source: Ironscales
Various types of deepfake audio and video impersonations are also increasingly prevalent, with more than 40% of companies encountering those techniques in the past year, according to the Ironscales survey.
Companies are trying to keep up on the cybersecurity awareness front, with 88% providing deepfake-related training in the past year, up from 68% in 2024. Yet while almost every cybersecurity professional expresses confidence in their company's ability to defend against a deepfake attack, and nearly three-quarters say they're "very confident," the majority of organizations have failed to fend off such attacks and have suffered financial losses. The typical victim lost an estimated $167,000 (an adjusted figure from Dark Reading; Ironscales reported a mean loss of $280,000, which is skewed by outsized losses, including the 5% of surveyed companies that lost more than $1 million).
Original Source: https://www.darkreading.com/cybersecurity-operations/deepfake-awareness-high-cyber-defenses-lag