Attention: You are using an outdated browser, device or you do not have the latest version of JavaScript downloaded and so this website may not work as expected. Please download the latest software or switch device to avoid further issues.
| 27 Apr 2026 | |
| Written by Gabi Gerber | |
| Attacks & Threats |
Memory files can help artificial intelligence (AI) perform better, but researchers have found they are also a persistent trouble spot.
AI memory files and context data help personalize requests and provide additional information that large language (LLMs) and other foundational AI models can use to deliver the best responses. But a persistent issue is proving to be a fundamental weakness in the security of AI systems. More here
AI's danger isn't that it's creating new bugs, it's that it's amplifying old ones. More...
The prompt-injection vulnerability in the agentic AI product for filesystem operations was a sanitization issue that all… More...
Chained Bypasses Exfiltrate Data Via Hidden AI Prompts More...
Attackers can abuse the near-maximum severity flaw in nginx-ui to restart, create, modify, and delete NGINX configuratio… More...
Cisco found and fixed a significant vulnerability in the way Anthropic handles memories, but experts warn that mishandled memory files will continue to threaten AI systems. More...
AI's danger isn't that it's creating new bugs, it's that it's amplifying old ones. More...
The prompt-injection vulnerability in the agentic AI product for filesystem operations was a sanitization issue that all… More...
Chained Bypasses Exfiltrate Data Via Hidden AI Prompts More...
Attackers can abuse the near-maximum severity flaw in nginx-ui to restart, create, modify, and delete NGINX configuratio… More...