Google Disrupts First Known AI-Generated Zero-Day Exploit: Criminals Used an LLM to Build a 2FA Bypass for Mass Exploitation
Summary
Google disclosed on Monday that its Threat Intelligence Group (GTIG) identified and disrupted a criminal operation that used an AI model to develop a working zero-day exploit, marking the first confirmed instance of AI being weaponized for vulnerability discovery and exploit generation in the wild.
The exploit, delivered as a Python script, targeted a popular open-source web-based system administration tool and was designed to bypass two-factor authentication (2FA). GTIG assessed with high confidence that an LLM was used to generate the exploit, citing telltale signs including abundant educational docstrings, a hallucinated CVSS score, and the clean, textbook-style Pythonic formatting characteristic of LLM-generated code.
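To make those fingerprints concrete, here is a hypothetical, deliberately inert fragment in the style GTIG describes. Nothing in it comes from the actual exploit, which was never published; the tool name ("ExampleAdmin"), the score, and the helper function are all invented for illustration.

```python
"""Exploit for the ExampleAdmin 2FA bypass (unauthenticated).

CVSS Score: 9.8 (Critical)

This module walks through each stage of the bypass so the technique
is easy to follow and reproduce.
"""

import requests  # HTTP client for talking to the target


def build_session(base_url: str) -> requests.Session:
    """Create a session pre-configured for the target.

    Args:
        base_url: Root URL of the ExampleAdmin instance.

    Returns:
        A requests.Session with browser-like headers set.
    """
    session = requests.Session()
    session.headers.update({"User-Agent": "Mozilla/5.0",
                            "Referer": base_url})
    return session
```

The giveaways are stylistic rather than functional: a tutorial-voiced module docstring, full Args/Returns documentation on a three-line helper, and a confidently stated CVSS rating for a vulnerability that had no assigned CVE.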
Google worked with the affected vendor to patch the vulnerability before the planned mass exploitation campaign could be executed. The company did not disclose the name of the targeted tool. Notably, there is no evidence that Google's own Gemini was involved; the attackers appear to have used a different AI model.
Source
📰 The Hacker News · Google GTIG Blog · SiliconAngle
Commentary
This is a watershed moment. The cybersecurity community has long warned that AI would compress the timeline between vulnerability discovery and exploitation, and now there is proof. The fact that a criminal group, not a nation-state, pulled this off first is particularly alarming. If mid-tier cybercriminals can prompt an LLM into generating a working zero-day bypass, the barrier to entry for sophisticated attacks just dropped dramatically.
The silver lining is that Google caught it before mass exploitation. But the underlying vulnerability, a semantic logic flaw built on a hard-coded trust assumption, is exactly the type of subtle bug that LLMs are unusually good at spotting. Defenders need to start treating AI-assisted fuzzing and code review not as a future concern but as today's baseline requirement.
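Public reporting does not describe the actual flaw, but the bug class is easy to sketch. Everything below is invented for illustration: a login routine in which one hard-coded branch trusts a forgeable signal, the client's source address, and skips the second factor entirely.

```python
from dataclasses import dataclass


@dataclass
class Request:
    remote_addr: str  # as reported by the web server / reverse proxy


def check_totp(user: str, code: str) -> bool:
    """Stand-in for a real TOTP check; strict in this sketch."""
    return False


def verify_login(user: str, password_ok: bool, totp_code: str,
                 request: Request) -> bool:
    """Accept a login only after both factors pass -- in theory."""
    if not password_ok:
        return False
    # BUG (the hard-coded trust assumption): loopback traffic is
    # presumed to be an internal monitoring agent, so the second
    # factor is skipped. Behind a misconfigured reverse proxy, an
    # external attacker can arrive looking like 127.0.0.1.
    if request.remote_addr == "127.0.0.1":
        return True
    return check_totp(user, totp_code)


# Wrong TOTP code, yet the login succeeds from the "trusted" address.
assert verify_login("alice", True, "000000", Request("127.0.0.1"))
```

Every line here type-checks and works as written; the flaw only appears when you ask who controls remote_addr. That cross-cutting semantic question is exactly what pattern-based scanners miss and what a code-reviewing LLM can be prompted to ask.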