Blog
Research, playbooks, and honest writing.
By practitioners, for practitioners. No vendor fluff. No AI-generated filler. Just what we've actually learned operating Guardra across 900+ teams.
MITRE ATLAS for practitioners: what to instrument first
ATLAS is comprehensive but sprawling. Here are the six tactics that matter for production agents — and how to map detections to each.
Dr. Elena Markov
Chief Scientist, Guardra AI
The tool-call security playbook
Tools are execution primitives. Treat them like shell commands. Here's a 12-point checklist that stops 80% of agent misuse.
Jamal Okafor
VP Engineering, Guardra AI
RAG poisoning: field notes from 38 incidents
The attack is simple: plant a document your victim's agent will retrieve. The defense is surprisingly neglected.
Ramiz Rafiq
Founder, Guardra AI
Eval-driven development: TDD for LLM apps
Write the eval first. Write the prompt second. Ship with confidence. A practical guide from teams doing it.
Dr. Elena Markov
Chief Scientist, Guardra AI
CI gating for AI-generated code
Your engineers are merging AI output at 2x human velocity. Your review process is the one from 2021. Here's how to gate.
Jamal Okafor
VP Engineering, Guardra AI
How to audit an AI agent in 2026
A practitioner's walkthrough: what to look at, in what order, and which attack classes actually matter in production.
Ramiz Rafiq
Founder, Guardra AI
The only LLM reliability metrics that matter
Faithfulness, hallucination rate, tool-call correctness, injection resilience. Everything else is vanity.
Dr. Elena Markov
Chief Scientist, Guardra AI
12.8 million secrets leaked to LLMs last year. Here's the pattern.
A year of scanning prompts, memory stores, and logs across 4.2M repos. The leaks follow three predictable shapes.
Jamal Okafor
VP Engineering, Guardra AI