5 things your AI agent should never leak (and how to detect them)

Source: DEV Community
AI agents handle sensitive data all the time, yet most have zero controls on what gets passed to external APIs. Here are the five most common leaks:

1. PII in tool call arguments. Your agent sends a customer name and email to an LLM for summarization. That PII just left your system, and most teams do not even know it happened.

2. API keys in agent context. The agent reads a config file, picks up a database password, and passes it as context to the next tool call. Now your secrets are in an LLM provider's logs.

3. Prompt injection via tool outputs. A tool returns data containing hidden instructions. The agent follows them and exfiltrates data through a subsequent tool call.

4. Financial data in reasoning chains. The agent reasons about revenue numbers, customer counts, and pricing. All of it ends up in trace logs that may not be access-controlled.

5. Medical or legal information. Healthcare and legal agents handle privileged information. Without scanning, this data flows freely between tools.

Detection: asqav scans
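Leaks 1 and 2 above can be caught before a request leaves your system by scanning tool call arguments. A minimal sketch, assuming hand-rolled regex patterns and a hypothetical `scan_tool_args` helper; a real deployment would use a vetted PII/secret-detection library rather than these illustrative patterns:

```python
import re

# Illustrative patterns only; production systems should use a
# maintained detection library, not ad-hoc regexes like these.
PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_tool_args(args: dict) -> list[tuple[str, str]]:
    """Return (field, category) pairs for every sensitive match found
    in a tool call's arguments, before the call goes out."""
    findings = []
    for field, value in args.items():
        if not isinstance(value, str):
            continue
        for category, pattern in PATTERNS.items():
            if pattern.search(value):
                findings.append((field, category))
    return findings
```

Running the gate on an outbound call would look like `scan_tool_args({"query": "summarize for jane@example.com"})`, which flags the `query` field as containing an email; the caller can then block, redact, or log the call before anything reaches the external API.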
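Leak 3 runs in the other direction: the dangerous data comes *into* the agent through a tool result. One hedged mitigation sketch is to quarantine tool outputs that look like instructions before they re-enter the agent's context; the marker phrases and the `quarantine_tool_output` helper below are hypothetical, and a heuristic like this is a tripwire, not a complete defense:

```python
import re

# Hypothetical marker phrases; real defenses combine heuristics like
# this with strict output typing and never letting tool output alone
# decide which tool runs next.
INJECTION_MARKERS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"do not tell the user",
]
MARKER_RE = re.compile("|".join(INJECTION_MARKERS), re.IGNORECASE)

def quarantine_tool_output(output: str) -> tuple[str, bool]:
    """Return the output plus a flag saying whether it looks like an
    attempt to instruct the agent; callers can drop or review it."""
    suspicious = MARKER_RE.search(output) is not None
    return output, suspicious
```

The flag lets the orchestrator decide policy: drop the output, strip the matching span, or route the whole tool call to a human before the agent acts on it.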