“Comment and Control” — New Prompt Injection Attack Hijacks Claude Code, Gemini CLI, and GitHub Copilot Agents

Summary

Security researchers have disclosed a new prompt injection attack method called “Comment and Control” that can hijack AI coding agents through untrusted GitHub data — PR titles, issue comments, and issue bodies. The attack has been confirmed to work against Anthropic’s Claude Code Security Review, Google’s Gemini CLI Action, and GitHub Copilot Agent.

Disclosed by security engineer Aonan Guan and researchers from Johns Hopkins University, the attack exploits how AI agents process untrusted GitHub data to execute arbitrary commands and exfiltrate credentials. Against Claude Code, attackers craft malicious PR titles to trick the AI into revealing credentials in GitHub Actions logs. Against Gemini CLI, injected payloads in issue comments bypass guardrails to extract full API keys. Against Copilot, HTML comments hide payloads that bypass environment filtering and scan for secrets.

All three vendors — Anthropic, Google, and GitHub — have confirmed the findings and awarded bug bounties. Anthropic classified the issue as “critical,” while GitHub described it as a “known architectural limitation.” However, no CVEs have been assigned and no public advisories have been published, potentially leaving users with older deployments exposed.

Source

SecurityWeek · The Register · CyberNews

Commentary

This is the kind of vulnerability that should make every engineering team pause before plugging AI agents into their CI/CD pipelines. The attack surface is deceptively simple: AI agents read untrusted text (PR titles, issue comments) and treat it as instruction. It is prompt injection 101, but in a context where the agent has access to secrets, execution environments, and network resources.

GitHub calling this a “known architectural limitation” is technically honest but practically alarming. It means there is no clean fix — the fundamental design of LLM-based agents that ingest untrusted input and have execution privileges is inherently vulnerable. The researchers explicitly warn this pattern extends beyond GitHub to Slack bots, Jira agents, email agents, and deployment automation. If your AI agents touch untrusted data and have access to secrets, you have this problem. The question is whether you know it yet.

You May Have Missed