Agentic AI Sparks New Era of Self-Directed Cyber Attacks, Experts Warn of AI Arms Race
August 26, 2025
Agentic AI is being deployed by threat actors to run autonomous, multi-step social engineering attacks without continuous human oversight, signaling a shift toward highly capable, self-directed campaigns.
Industry analyses from McAfee and others warn of an AI-versus-AI arms race, underscoring the need for proactive awareness and robust controls to stay ahead.
Defense guidance advocates a hybrid approach that combines AI-powered security with human vigilance, emphasizing behavioral anomaly detection, independent verification channels, and pattern-based checks for AI-generated content.
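As a rough illustration of how those three controls might fit together, the sketch below scores an inbound message with simple pattern-based checks and behavioral flags, then routes high-risk items to an independent verification channel. It is a minimal sketch only: the keyword list, weights, field names, and threshold are hypothetical placeholders, not values taken from the article or any vendor guidance.

    import re
    from dataclasses import dataclass

    # Hypothetical phrasing patterns often associated with social engineering lures.
    URGENCY_PATTERNS = [
        r"\burgent\b", r"\bimmediately\b", r"\bverify your account\b",
        r"\bwire transfer\b", r"\bgift card\b",
    ]

    @dataclass
    class MessageSignal:
        sender: str
        body: str
        sender_known: bool          # is the sender in the org's directory or prior contacts?
        requests_credentials: bool  # does the message ask for passwords, MFA codes, or payment?

    def phishing_risk_score(msg: MessageSignal) -> float:
        """Combine pattern checks and behavioral flags into a 0-1 risk score (illustrative weights)."""
        score = 0.0
        body = msg.body.lower()
        # Pattern-based check: urgency and payment language common in AI-generated lures.
        hits = sum(bool(re.search(p, body)) for p in URGENCY_PATTERNS)
        score += min(hits, 3) * 0.15
        # Behavioral anomaly: unfamiliar sender making a sensitive request.
        if not msg.sender_known:
            score += 0.25
        if msg.requests_credentials:
            score += 0.30
        return min(score, 1.0)

    def needs_out_of_band_verification(msg: MessageSignal, threshold: float = 0.5) -> bool:
        """Flag high-risk messages for confirmation over an independent channel (e.g., a phone call)."""
        return phishing_risk_score(msg) >= threshold

    if __name__ == "__main__":
        example = MessageSignal(
            sender="ceo-office@example.test",
            body="Urgent: please verify your account and arrange a wire transfer immediately.",
            sender_known=False,
            requests_credentials=True,
        )
        print(phishing_risk_score(example))             # 1.0 (capped) for this contrived example
        print(needs_out_of_band_verification(example))  # True

In practice the pattern layer would be backed by trained classifiers and the anomaly layer by baselines of normal sender behavior; the point of the sketch is only that automated scoring and human-verified, out-of-band confirmation complement each other.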
Agentic AI powers multi-stage campaigns across email, voice, and social platforms, tailoring messages from real-time feedback and pushing click-through rates to roughly 54%, compared with about 12% for manually crafted phishing.
These threats hinge on dynamic code generation that evades antivirus through constant mutation, and on highly personalized lures built from social media profiles, breach data, and misconfigured APIs.
Deepfake and phishing operations have evolved from reactive, human-driven tools into autonomous agents that monitor targets, learn, and adapt in real time.
Projections anticipate that by 2028 roughly one-third of AI interactions could involve autonomous agents, a capability cybercriminals are expected to exploit for broader, more efficient campaigns.
Source

GBHackers Security • Aug 26, 2025
Threat Actors Leverage AI Agents to Conduct Social Engineering Attacks