CI gating for AI-generated code
At 46% AI-generated code across our customer base, human code review is no longer the bottleneck we optimize around; it is the asset we need to protect. AI-generated code carries measurably more vulnerabilities per line, and it carries a hidden cost as well: it fatigues reviewers. Automated gating is the only sustainable answer.
Gate level 1: pre-merge blockers. Critical SAST findings, hard-coded secrets, CVEs in newly added dependencies, and OWASP LLM Top 10 matches on any added prompt should all stop the PR before a human ever looks at it. Guardra does this in under 90 seconds per diff.
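The level-1 decision is a hard pass/fail over the diff's findings. A minimal sketch, assuming findings arrive from upstream scanners as records with a kind, a severity, and a location; the field names and blocking categories here are illustrative, not Guardra's actual schema:

```python
from dataclasses import dataclass

# Finding kinds that block a merge outright. Names are assumptions
# standing in for whatever the SAST, secret-scan, dependency-audit,
# and prompt-scan stages actually emit.
BLOCKING_KINDS = {"sast_critical", "hardcoded_secret", "new_dep_cve", "llm_top10_prompt"}

@dataclass
class Finding:
    kind: str       # e.g. "sast_critical", "hardcoded_secret"
    severity: str   # "critical", "high", "medium", "low"
    location: str   # file:line within the diff

def gate_level_1(findings: list[Finding]) -> tuple[bool, list[Finding]]:
    """Return (merge_allowed, blockers). Any blocking-kind finding fails the gate."""
    blockers = [f for f in findings if f.kind in BLOCKING_KINDS]
    return (len(blockers) == 0, blockers)
```

The key design choice is that level 1 is kind-based, not severity-based: a hard-coded secret blocks even if a scanner scores it medium, so the gate cannot be argued down in review.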
Gate level 2: reviewer hints. Medium-severity findings appear as PR comments from the bot, and the bot proposes a patch as a nested commit. 74% of those nested commits get accepted, which means 74% of medium findings never touch a human's cognitive budget.
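A level-2 hint is just a rendered comment that pairs the finding with its proposed fix. A sketch of the rendering step, assuming a plain-dict finding and using GitHub's `suggestion` comment syntax for the patch; the comment layout is an assumption, not Guardra's actual output:

```python
def format_hint_comment(finding: dict, patch: str) -> str:
    """Render a medium finding as a PR comment carrying a suggested fix.

    `finding` is assumed to have "rule", "location", and "message" keys.
    """
    fence = "`" * 3  # built dynamically so this source nests inside docs cleanly
    return (
        f"**{finding['rule']}** (medium) at `{finding['location']}`\n"
        f"{finding['message']}\n\n"
        "Proposed fix:\n"
        f"{fence}suggestion\n{patch}\n{fence}"
    )
```

Using the platform's native suggestion format is what makes the 74% acceptance path cheap: the reviewer applies the patch with one click instead of context-switching into an editor.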
Gate level 3: trend alarms. Repo-level metrics such as secrets committed per week, unfixed criticals, and eval score delta go to Slack and to your security dashboard. When a trend breaks, you know whose sprint to sit in on.
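"When a trend breaks" needs a concrete definition to be alertable. One simple sketch: compare this week's value of a metric against its trailing mean and fire when the regression exceeds a tolerance. The 50% default and the trailing-mean baseline are illustrative choices, not the product's actual alarm rule:

```python
from statistics import mean

def trend_broken(history: list[float], current: float, tolerance: float = 0.5) -> bool:
    """True when `current` exceeds the trailing mean of `history`
    by more than `tolerance` (expressed as a fraction of the baseline)."""
    if not history:
        return False  # no baseline yet: never alarm on the first data point
    baseline = mean(history)
    if baseline == 0:
        return current > 0  # any nonzero value breaks a clean streak
    return (current - baseline) / baseline > tolerance
```

For example, a repo averaging 2-3 committed secrets per week that suddenly logs 6 trips the alarm, while a week of 3 does not; and a repo with a zero-secret streak alarms on its very first leak.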
The measurement that matters is not 'how many findings' but 'how much reviewer time freed.' Our best customers report 30%+ reviewer capacity reclaimed in the first quarter. That capacity doesn't vanish; it goes into deeper review of the remaining 9% of genuinely risky changes.