Safety · Apr 22, 2026

AI is lowering barriers for cybercriminals while defenses race to catch up

Generative AI tools are enabling attackers to automate phishing, malware tweaking, and vulnerability discovery at scale, but defensive AI systems are also improving rapidly.

Trust score: 52 · Hype: Some hype

1 source · cross-referenced

TL;DR
  • Criminals are using generative AI to produce phishing emails, deepfakes, and malicious code at scale, with scam centers in Southeast Asia and state-level actors leveraging these tools.
  • Anthropic's Mythos vulnerability-discovery model identified thousands of critical vulnerabilities in major operating systems and browsers, prompting delayed release and formation of Project Glasswing for defensive research.
  • Microsoft processed 100 trillion suspicious signals daily and blocked $4 billion in fraudulent transactions between April 2024 and April 2025, many aided by AI-generated content.
  • Security researchers remain optimistic that basic defenses like software updates can thwart current attacks, but the trajectory of more sophisticated future threats remains uncertain.

Since ChatGPT's public launch in late 2022, cybercriminals have rapidly adopted large language models to automate and enhance attack operations. Generative AI now enables attackers to compose convincing phishing emails, create realistic deepfakes, modify malware for evasion, probe networks for vulnerabilities, and generate ransom notes at scale. Scam operations across Southeast Asia have embraced inexpensive AI tools to target more victims faster and relocate quickly, according to Interpol warnings. The United Arab Emirates has reported thwarting AI-backed attacks on critical infrastructure.

The effectiveness of these attacks does not depend on sophistication. Large-scale, lower-quality attacks succeed through sheer volume: a single malicious email that reaches an undefended system or an unsuspecting user at the right moment can result in compromise. This volume advantage forces defenders to cover a far larger attack surface.

The dual-use nature of AI vulnerability research complicates the security picture. Anthropic's Mythos model, currently in testing, identified critical vulnerabilities in every major operating system and web browser. Anthropic delayed Mythos's public release and established Project Glasswing, a consortium of tech companies aimed at applying these discovery capabilities to defense. This decision highlights the tension between offensive capability advancement and controlled deployment.

Defensive AI systems are improving in parallel. Microsoft's AI systems flag over 100 trillion potentially malicious signals daily, and the company reports blocking $4 billion in fraudulent transactions between April 2024 and April 2025, many involving AI-assisted content. Cybersecurity researchers believe current basic defenses, such as software updates and network security protocols, remain effective against existing attacks, but consensus erodes on preparedness for more advanced future threats.

Sources
  1. MIT Technology Review — AI-supercharged scams

Stories may contain errors. Dispatch is assembled with AI assistance and curated by human editors; despite the trust-score filter, mistakes happen. We correct publicly — every article links to its revision history. Nothing here is financial, legal, or medical advice. Verify before relying on any claim.

© 2026 Dispatch. No ads. No sponsorships. No paid placement. Reader-supported via Ko-fi.

Built by a person who cares about honest AI news.