12.8 million secrets leaked to LLMs last year. Here's the pattern.
We scanned every prompt, trace, and memory object that our customers opted in to analysis during 2025. The dataset is 31 petabytes. The leaked-secret count is 12.8 million, or roughly 24 per minute. The attack surfaces group into three shapes.
Shape one: engineers pasting real credentials into AI coding assistants to get help. 61% of leaks. The user is trying to debug an integration; they paste a snippet that includes a hard-coded key. The assistant processes it, caches it, maybe reflects it in the response. The key is now in the vendor's logs. 94% of these leaks involve keys that were never rotated afterward.
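The fix for this shape has to sit client-side, before the prompt leaves the process. A minimal sketch of credential-shaped-token scrubbing; the three patterns are illustrative (they match documented AWS, OpenAI-style, and GitHub token prefixes), and a real scanner would carry a far larger rule set:

```python
import re

# Illustrative patterns only; production scanners use hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key ID
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # OpenAI-style API key
    re.compile(r"ghp_[A-Za-z0-9]{36}"),    # GitHub personal access token
]

def redact(text: str) -> str:
    """Replace credential-shaped tokens before the prompt is sent anywhere."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Redaction beats blocking here: the engineer still gets help with the surrounding code, and the placeholder makes the scrub visible rather than silent.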
Shape two: system prompts with baked-in credentials. 27% of leaks. A prompt template includes a literal API key so the agent can call an internal service. When a model output ends up in a log, a ticket, or a user-visible field, the key leaks. This is the easiest class to prevent and the most common in mid-market SaaS.
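The prevention is structural: the key never enters prompt text at all, so no model output can echo it. A hedged sketch; the env-var lookup stands in for a real secrets-manager client (Vault, AWS Secrets Manager, etc.), and the function and header names are hypothetical:

```python
import os

# The prompt template carries no literal credentials.
SYSTEM_PROMPT = "You are an agent that can call the internal billing service."

def get_service_key() -> str:
    """Fetched per call. The env-var read is a stand-in for a real
    secrets-manager client call (assumption, not shown)."""
    return os.environ["INTERNAL_SERVICE_KEY"]

def build_request(payload: dict) -> dict:
    # The key rides in the request headers, not in any text the model
    # sees, so a logged or ticketed completion cannot leak it.
    return {
        "headers": {"Authorization": f"Bearer {get_service_key()}"},
        "json": payload,
    }
```

Per-call retrieval also makes rotation cheap: swap the secret in the store and the next call picks it up, with no prompt template to redeploy.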
Shape three: memory persistence. 12% of leaks. An agent receives a credential in a user turn, stores it in long-term memory, and later surfaces it in a completion. The scary part: memory stores are often backed by vector databases with weaker access controls than the primary DB.
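Because anything that reaches the store can resurface in a later completion, the write path is the place to intervene. A minimal sketch, assuming a simple list-backed stand-in for a vector-DB memory; the class and patterns are illustrative:

```python
import re

# Credential-shaped patterns, applied at the memory-write boundary
# (illustrative subset; real scanners use larger rule sets).
KEY_SHAPED = re.compile(
    r"(AKIA[0-9A-Z]{16}|sk-[A-Za-z0-9]{20,}|ghp_[A-Za-z0-9]{36})"
)

class MemoryStore:
    """Stand-in for a vector-DB-backed long-term memory."""

    def __init__(self) -> None:
        self._items: list[str] = []

    def write(self, text: str) -> None:
        # Redact before persisting: once stored, a secret can be
        # surfaced by any future retrieval, long after rotation.
        self._items.append(KEY_SHAPED.sub("[REDACTED]", text))

    def all(self) -> list[str]:
        return list(self._items)
```

The same guard belongs on the read path as defense in depth, since older entries may predate the write-side filter.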
Prevention is layered. Strip credential-shaped tokens at SDK ingest. Never put real keys in system prompts; use per-call retrieval instead. Treat memory stores like production databases, with their own rotation schedule and access audit.