AI-Powered Phishing Surge: Cyberattacks Spike 312% as Deepfakes and Spoofing Threaten Security in 2024
July 6, 2025
Effective cyber risk management strategies—such as multifactor authentication, strong passwords, regular account monitoring, and incident response plans—are essential to prevent and mitigate identity theft.
Hackers are using generative AI to produce convincing deepfakes (realistic synthetic audio and video) for impersonation and scams, exemplified by a recent Hong Kong case in which HK$200 million was fraudulently transferred.
AI-powered deepfake technology enables the creation of highly realistic synthetic media, complicating detection and increasing the risk of impersonation and fraud.
AI-driven spoofing and business email compromise schemes are becoming more sophisticated, with attackers impersonating trusted entities through emails, websites, and SMS using AI-generated content.
Phishing remains a major threat, with cybercriminals using AI to automate and sharpen targeted attacks, building counterfeit websites and mining social media for reconnaissance on their victims.
AI-driven techniques like self-modifying malware, automated phishing, and social engineering are making cyberattacks more sophisticated, rapid, and strategic.
The 2024 report from the Identity Theft Resource Center revealed a staggering 312% increase in victim notices, soaring from 419 million in 2023 to over 1.7 billion, with the financial sector bearing the brunt of breaches.
The expanding connectivity through smartphones, wearables, and IoT devices has widened the attack surface, making device and account security more challenging and increasing vulnerability to identity theft.
Combating AI-enabled identity theft requires vigilance, source verification, antivirus and spoof detection software, data encryption, and the use of AI systems for real-time identity management.
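Spoof detection in email typically means checking whether a message's claimed sender lines up with where it actually came from; full implementations evaluate SPF, DKIM, and DMARC. A simplified heuristic sketch (the header values and domains below are hypothetical, and a real deployment would rely on the receiving server's authentication results rather than this single check):

```python
import email
from email import policy

def domain(addr: str) -> str:
    """Extract the lowercased domain from an address like 'Alice <a@b.example>'."""
    return addr.rpartition("@")[2].strip(">").lower()

def looks_spoofed(raw: bytes) -> bool:
    """Flag a mismatch between the visible From domain and the Return-Path domain.

    A common business-email-compromise tell: the display address impersonates a
    trusted entity while bounces route back to attacker infrastructure.
    """
    msg = email.message_from_bytes(raw, policy=policy.default)
    from_dom = domain(str(msg.get("From", "")))
    return_dom = domain(str(msg.get("Return-Path", "")))
    return bool(from_dom and return_dom and from_dom != return_dom)
```

This is only a first-pass filter: attackers can align both headers, which is why DMARC alignment checks and DKIM signature verification on the mail server remain the authoritative defense.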
Source

Forbes • Jul 6, 2025
Criminal Hackers Are Employing AI To Facilitate Identity Theft.