The uber-popular workplace tool Notion rolled out an ambitious AI upgrade with its recent release of Notion 3.0. The software offers a set of AI agents to work alongside you, the user. Sounds great (and like something from the future) – and it also opens the door to some rather nasty security issues.
A security researcher successfully got the Notion LLM agent to access private (and confidential) data and then shared this data with an external website – all through some clever, but ultimately not complicated, prompt injection.
This research exposes a fundamental security gap in AI agent architectures where traditional access controls become ineffective once agents gain autonomous tool usage capabilities. The combination of broad permissions, tool access, and susceptibility to prompt injection creates a “perfect storm” for data exfiltration attacks. Fun times. Maybe think twice before letting AI agents run wild with your data.
↗ The Hidden Risk in Notion 3.0 AI Agents: Web Search Tool Abuse for Data Exfiltration