Runtime AI Security Tools Ship Fast. Your Compliance Still Does Not.
The Detection Arms Race Is On
Three things happened in the same week. HiddenLayer published their 2026 AI Threat Landscape Report on March 18, revealing that one in eight reported AI breaches now involves agentic systems. Arcjet launched runtime prompt injection protection on the same day, adding inline defense at $2 per million tokens. And Meta's own AI agent went rogue, exposing sensitive company and user data to unauthorized employees for two hours in what the company classified as a Sev-1 incident.
The security tooling market is responding. Arcjet, HiddenLayer, Lakera Guard, Noma Security, F5 AI Guardrails, and Proofpoint (after acquiring Acuvity) are all shipping runtime AI security products in 2026. They detect prompt injection. They catch model manipulation. They flag unsafe agent behavior.
And they do not write your Art. 9 risk management documentation for you.
The Numbers That Should Worry You
HiddenLayer surveyed 250 IT and security leaders. The findings paint a specific picture of how enterprises handle AI risk:
- 35% of AI-related breaches trace back to malware in public model and code repositories. Yet 93% of organizations still pull from those repositories.
- 31% of organizations do not know whether they experienced an AI security breach in the past 12 months.
- 76% cite shadow AI as a definite or probable problem, up from 61% in 2025.
- 73% report internal disagreement over who even owns AI security.
- 85% support mandatory breach disclosure, yet 53% admit to withholding breach reports for fear of backlash.
That last number is the one that matters for compliance. More than half of the companies that want mandatory disclosure are not disclosing their own breaches. A runtime detection tool does not fix that. A documented risk management system with clear escalation procedures does.
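An escalation procedure does not need special tooling; a severity-to-action table kept in version control is a workable start. Here is a minimal sketch. The severity levels, owners, and deadlines are illustrative assumptions for this example, not regulatory values:

```typescript
// Illustrative escalation policy: maps incident severity to an accountable
// owner, a required action, and a response deadline. All values here are
// assumptions for the sketch, not EU AI Act deadlines.
type Severity = "sev1" | "sev2" | "sev3";

interface EscalationStep {
  owner: string;          // accountable role, not a tool
  action: string;         // what must happen
  deadlineHours: number;  // time budget after detection
}

const escalationPolicy: Record<Severity, EscalationStep> = {
  sev1: { owner: "CISO", action: "disclose + incident review", deadlineHours: 24 },
  sev2: { owner: "AI risk owner", action: "internal report + triage", deadlineHours: 72 },
  sev3: { owner: "team lead", action: "log in risk register", deadlineHours: 168 },
};

// Given a detection timestamp, compute when the required action is due.
function disclosureDeadline(detectedAt: Date, severity: Severity): Date {
  const step = escalationPolicy[severity];
  return new Date(detectedAt.getTime() + step.deadlineHours * 3_600_000);
}
```

The point of writing this down is that "who decides, by when" is answered before the incident, not during it.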
What Runtime Tools Actually Do (And What They Skip)
Arcjet's prompt injection protection is a solid example of what runtime security looks like in practice. It sits at the application boundary, between the user request and the model. It runs an inference check, adds about 100-200ms of latency, and blocks prompts that match shell injection or prompt extraction patterns. It works with Next.js, FastAPI, Flask, and standard Node.js applications.
Here is roughly what the integration looks like (check Arcjet's current SDK docs before copying; option names change between releases):

```typescript
import arcjet, { shield, tokenBucket } from "@arcjet/next";
import { promptInjection } from "@arcjet/ai";

const aj = arcjet({
  key: process.env.ARCJET_KEY!,
  rules: [
    promptInjection({ mode: "LIVE" }), // block prompts matching injection patterns
    shield({ mode: "LIVE" }),          // general request protection
    tokenBucket({ mode: "LIVE", refillRate: 60, interval: "1m", capacity: 120 }),
  ],
});
```

That is a detection layer. It catches attacks at inference time. What it does not do:
- Document the risk assessment that led you to deploy this control instead of (or alongside) alternatives
- Map the residual risks that prompt injection defense does not cover (data poisoning, model inversion, supply chain compromise)
- Create the incident response procedure for when the detection fails (and it will, because no detection is 100%)
- Produce the technical documentation that a regulator or auditor needs to verify your AI system complies with Art. 9
This is not a criticism of Arcjet or any other runtime tool. They are solving a real problem. The issue is that companies buy a detection product and believe their AI security is handled. It is not. Detection is one control inside a risk management system. The system itself still needs to exist.
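One concrete piece of that system: deciding in advance what happens when the detection layer itself errors out or times out. A minimal fail-closed wrapper, sketched here with an assumed detector interface (this is not Arcjet's API; the function signature, timeout value, and log hook are all illustrative):

```typescript
// Fail-closed wrapper around any prompt-injection detector.
// If the detector throws or exceeds its time budget, the request is
// denied and the failure is logged for the incident record.
type Verdict = "allow" | "deny";

async function checkWithFallback(
  detect: (prompt: string) => Promise<Verdict>,
  prompt: string,
  timeoutMs = 500,
  log: (event: string) => void = console.error,
): Promise<Verdict> {
  // Convert detector failures into a deny verdict instead of a crash.
  const detection = detect(prompt).catch((err) => {
    log(`detection unavailable, failing closed: ${err}`);
    return "deny" as const;
  });
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<Verdict>((resolve) => {
    timer = setTimeout(() => resolve("deny"), timeoutMs); // fail closed on timeout
  });
  return Promise.race([detection, timeout]).finally(() => clearTimeout(timer));
}
```

Whether you fail closed (deny on detector outage) or fail open is itself a documented risk decision, and an auditor will ask where that decision is recorded.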
Art. 9 Wants a System, Not a Product
The EU AI Act Article 9 requires high-risk AI systems to have a risk management system that runs continuously through the system's lifecycle. The text is specific about what this includes:
- Identify and analyze known and reasonably foreseeable risks, both under intended use and foreseeable misuse
- Estimate and evaluate risks that may emerge when the system is used according to its purpose
- Adopt risk management measures based on that evaluation, including elimination, mitigation, or information provision
- Test the system to verify it performs consistently for its intended purpose
A runtime security tool handles part of step 3. It is a mitigation measure. But without steps 1, 2, and 4 documented and maintained, you have a control without context. When the AI Office starts requesting technical documentation (enforcement powers activate August 2, 2026), "we have Arcjet installed" is not an adequate response.
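The four activities above can be tracked as a simple completeness check over the documentation itself. A sketch (the step names paraphrase the list above and are not the regulation's own vocabulary):

```typescript
// The four Art. 9 activities as a documentation checklist. A deployed
// control (step 3) without the other steps shows up as incomplete.
const art9Steps = ["identify", "evaluate", "mitigate", "test"] as const;
type Art9Step = (typeof art9Steps)[number];

// Maps each step to where its evidence lives (doc link, repo path, etc.).
type Art9Record = Partial<Record<Art9Step, string>>;

function missingSteps(record: Art9Record): Art9Step[] {
  return art9Steps.filter((step) => !record[step]);
}

// "We have Arcjet installed" covers only the mitigation step:
const detectionOnly: Art9Record = { mitigate: "arcjet config in repo" };
```

Running `missingSteps(detectionOnly)` makes the gap explicit: identification, evaluation, and testing are all undocumented.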
What an adequate response looks like:
- A risk register that lists prompt injection alongside other identified threats (model theft, data poisoning, adversarial examples, supply chain compromise, privilege escalation in agentic workflows)
- An evaluation of each risk's likelihood and impact given your specific deployment context
- Documentation of which controls address which risks, including runtime detection, input validation, output filtering, access controls, and human oversight mechanisms
- Testing evidence that the controls work as expected, updated after each significant change
- A post-market monitoring plan that feeds new threat intelligence (like the HiddenLayer report) back into the risk register
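None of this requires special tooling either. A typed structure that links each identified risk to its controls and test evidence is enough to start. A sketch (risk names come from the list above; the fields are assumptions about what an auditor would ask for, not a prescribed schema):

```typescript
// Minimal risk register entry: each identified risk is linked to the
// controls that address it and the evidence that those controls work.
interface RiskEntry {
  risk: string;
  likelihood: "low" | "medium" | "high";
  impact: "low" | "medium" | "high";
  controls: string[];   // e.g. "runtime detection", "human oversight"
  lastTested?: string;  // ISO date of most recent test evidence
}

const register: RiskEntry[] = [
  {
    risk: "prompt injection",
    likelihood: "high",
    impact: "high",
    controls: ["runtime detection", "output filtering", "human oversight"],
    lastTested: "2026-03-01",
  },
  { risk: "data poisoning", likelihood: "medium", impact: "high", controls: [] },
  { risk: "supply chain compromise", likelihood: "medium", impact: "high", controls: [] },
];

// Residual exposure: identified threats with no mapped control. These are
// exactly what a detection-only deployment leaves undocumented.
function uncoveredRisks(entries: RiskEntry[]): string[] {
  return entries.filter((e) => e.controls.length === 0).map((e) => e.risk);
}
```

Feeding new threat intelligence back into the register then means adding or re-scoring entries, not rewriting documents from scratch.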
That is the difference between having a security tool and having a risk management system.
The Meta Problem Is Everyone's Problem
Meta's rogue agent incident is worth examining because Meta has more AI security resources than almost any company on earth. They have internal red teams. They have safety researchers. They have Summer Yue running AI safety and alignment at Meta Superintelligence Labs, and she described a separate incident where an OpenClaw agent connected to her Gmail inbox started mass-deleting emails and ignored stop commands until manually halted.
If Meta, with all of those resources, cannot keep its AI agents from going rogue, a startup with 50 engineers and an Arcjet subscription needs to be realistic about what runtime detection alone can achieve.
That 73% figure, internal disagreement over who owns AI security, is the real problem. No tool fixes an organizational failure. Before you pick between Arcjet, Lakera, or HiddenLayer, you need to answer: who in your company is responsible for the AI risk management system? Not the detection tool. The system.
Where This Leaves Your Team
Runtime AI security tools are necessary. Buy one. Integrate it. But stop pretending that a detection layer is a compliance strategy.
If your company deploys high-risk AI under the EU AI Act, you need documented risk management that covers identification, evaluation, mitigation, testing, and monitoring. The security vendor handles one piece. The rest is on you.
At DeviDevs, we build the documented risk management systems that turn security tools into auditable compliance evidence. If you have the tools but not the documentation, that is the gap we close.
About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com