AI is lowering barriers for cybercriminals while defenses race to catch up
Generative AI tools are enabling attackers to automate phishing, malware tweaking, and vulnerability discovery at scale, but defensive AI systems are also improving rapidly.
- Criminals are using generative AI to produce phishing emails, deepfakes, and malicious code at scale, with scam centers in Southeast Asia and state-level actors leveraging these tools.
- Anthropic's Mythos vulnerability-discovery model identified thousands of critical vulnerabilities in major operating systems and browsers, prompting Anthropic to delay its release and to form Project Glasswing for defensive research.
- Microsoft processed over 100 trillion suspicious signals daily and blocked $4 billion in fraudulent transactions between April 2024 and April 2025, many involving AI-generated content.
- Security researchers remain optimistic that basic defenses like software updates can thwart current attacks, but the trajectory of more sophisticated future threats remains uncertain.
Since ChatGPT's public launch in late 2022, cybercriminals have rapidly adopted large language models to automate and enhance attack operations. Generative AI now enables attackers to compose convincing phishing emails, create realistic deepfakes, modify malware for evasion, search for network vulnerabilities, and rapidly generate ransom notes at scale. Scam operations across Southeast Asia have embraced inexpensive AI tools to target more victims faster and relocate quickly, according to Interpol warnings. The United Arab Emirates has reported thwarting AI-backed attacks on critical infrastructure.
The effectiveness of these attacks does not depend on sophistication. Large-scale, lower-quality campaigns succeed through sheer volume: a single malicious email reaching an undefended system, or an unsuspecting user at the right moment, can yield a successful compromise. This volume advantage forces defenders to cover a vastly larger attack surface.
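The volume argument can be made concrete with a back-of-the-envelope calculation. The sketch below assumes a hypothetical per-email success rate of 0.01%, a figure chosen for illustration rather than drawn from any reported data:

```python
# Illustrative only: how a low-quality, high-volume phishing campaign
# compounds small per-email odds. The 0.01% success rate is a
# hypothetical assumption, not a measured figure.
def p_at_least_one(per_email_rate: float, emails_sent: int) -> float:
    """Probability that at least one email in the campaign succeeds."""
    return 1.0 - (1.0 - per_email_rate) ** emails_sent

# At 1,000 emails the campaign is a long shot; at a million it is
# close to a sure thing.
for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} emails -> {p_at_least_one(0.0001, n):.4f}")
```

Even under this toy model, a campaign that scales from a thousand messages to a million goes from roughly a one-in-ten chance of any compromise to near certainty, which is why cheap generative tooling changes the economics for attackers.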
The dual-use nature of AI vulnerability research complicates the security picture. Anthropic's Mythos model, currently in testing, identified critical vulnerabilities in every major operating system and web browser. Anthropic delayed Mythos's public release and established Project Glasswing, a consortium of tech companies aimed at applying these discovery capabilities to defense. This decision highlights the tension between offensive capability advancement and controlled deployment.
Defensive AI systems are improving in parallel. Microsoft's AI systems flag more than 100 trillion signals daily as potentially malicious, and the company reports blocking $4 billion in fraudulent transactions between April 2024 and April 2025, many involving AI-assisted content. Cybersecurity researchers believe basic defenses such as timely software updates and standard network security protocols remain effective against current attacks, but consensus breaks down over preparedness for more sophisticated future threats.
- Apr 24, 2026 · TechCrunch — AI