Attention: You are using an outdated browser, device or you do not have the latest version of JavaScript downloaded and so this website may not work as expected. Please download the latest software or switch device to avoid further issues.
| 14 May 2026 | |
| Written by Gabi Gerber | |
| Attacks & Threats |
Developers using the latest versions of AI coding tools like Claude Code, Cursor CLI, Gemini CLI, and CoPilot CLI could inadvertently execute malicious code on their systems with a single keypress, or no keypress at all in continuous integration environments.
That, according to researchers at Adversa AI, is because none adequately warn users of how a malicious repo can auto-approve and spawn a Model Context Protocol (MCP) server without their explicit approval or knowledge. All four coding tools show some form of a trust dialog prompting the user to indicate whether they trust a particular repo, but they do not offer full details on what that consent might actually entail.
Adversa AI identified Claude Code as offering the least information in its trust dialog, and Gemini AI as offering the most, along with a choice in terms of allowing or disallowing an MCP server to execute on the developer's system. But the exposure is the same in all four, according to Adversa's lead researcher, Rony Utevsky. More here
Cyber adversaries have long used AI, but now attackers are using large language models to develop exploits and orchestrate complex attacks. More...
A proof-of-concept exploit (PoC) shows how someone with admin privileges can exploit the issue to steal passwords, and t… More...
The issue isn't artificial intelligence, but rather an industry adding AI agent integrations into production environment… More...
Cisco found and fixed a significant vulnerability in the way Anthropic handles memories, but experts warn that mishandle… More...
AI's danger isn't that it's creating new bugs, it's that it's amplifying old ones. More...
Malicious repositories can trigger code execution in Claude Code, Cursor CLI, Gemini CLI, and CoPilot CLI with minimal or no user interaction, thanks to skimpy warning dialogs. More...
Cyber adversaries have long used AI, but now attackers are using large language models to develop exploits and orchestra… More...
A proof-of-concept exploit (PoC) shows how someone with admin privileges can exploit the issue to steal passwords, and t… More...
The issue isn't artificial intelligence, but rather an industry adding AI agent integrations into production environment… More...
Cisco found and fixed a significant vulnerability in the way Anthropic handles memories, but experts warn that mishandle… More...