Prompt Injection Is No Longer Theoretical. Google Just Proved It.
3 Billion Pages. 32% More Attacks.
For years, prompt injection has been a conference talk topic. Researchers demonstrate clever attacks in controlled environments. Security teams nod, make a note, and go back to patching real vulnerabilities.
That era is over.
Google's Threat Intelligence team just published the first large-scale empirical study of prompt injection in the wild. They scanned 2 to 3 billion crawled web pages per month, looking for indirect prompt injection payloads hidden in websites. What they found: a 32% increase in malicious prompt injection attempts between November 2025 and February 2026.
Not research labs. Not CTF challenges. Real websites. Real payloads. Waiting for your AI agent to read them.
What These Attacks Actually Look Like
Indirect prompt injection works by embedding hidden instructions in content that AI agents consume. When an AI assistant browses a web page, summarizes a document, or processes an email, the hidden instructions get treated as commands.
Google categorized what they found into several buckets: SEO manipulation, harmless pranks, attempts to deter AI crawlers, and genuinely malicious payloads. The malicious ones are what matter.
One payload Google documented embedded a complete PayPal transaction: a PayPal.me link, a fixed $5,000 amount, and step-by-step instructions for processing the payment. It was designed for AI agents with integrated payment capabilities. Browser agents with saved credentials. Financial assistants with wallet access. Agentic tools authorized to make transactions.
The attack doesn't need to crack anything. The agent does exactly what it was authorized to do. It just receives its instructions from the wrong source. The transaction logs look identical to legitimate operations. No anomalous login. No brute force. Just an agent following instructions it found on a web page.
A second case used meta tag injection combined with a persuasion keyword to redirect financial actions toward a Stripe donation link. Different technique, same result: the AI agent moves money where the attacker wants.
Even Anthropic Got Hit
If you think your AI deployment is too small to be a target, consider this: Anthropic's own Claude.ai was vulnerable to a chained prompt injection and data exfiltration attack.
The attack, named "Claudy Day" by researchers at Oasis Security, chained three vulnerabilities:
-
Invisible prompt injection via URL parameters. Certain HTML tags could be embedded in a Claude.ai URL, invisible in the text box but fully processed by the model when the user hit Enter.
-
Data exfiltration through the Files API. Claude's sandbox restricts outbound traffic but allows connections to api.anthropic.com. By embedding an attacker-controlled API key in the hidden prompt, the researchers instructed Claude to search conversation history, write sensitive data to a file, and upload it to the attacker's account.
-
Open redirect on claude.com. This made the malicious URL look like a legitimate Anthropic link, perfect for distribution through Google Ads targeting Claude users.
A default, out-of-the-box Claude session. No custom tools, no exotic configurations. The most safety-focused AI lab in the industry, and their production system was vulnerable to a prompt injection chain.
Anthropic fixed the injection issue. But the point stands: if Anthropic's production system had this gap, what about your internal AI agents that nobody has audited?
Why This Is Getting Worse
Google's researchers made an important observation: they did not see particularly sophisticated attacks. The payloads they found were relatively basic. But the trend line is what matters.
Two forces are converging. AI systems are becoming more capable, which makes them more valuable targets. An AI agent that can browse the web, send emails, execute commands, and process payments is a much higher-value target than one that just answers questions. At the same time, attackers are beginning to automate their operations with agentic AI, bringing down the cost of attack.
More valuable targets plus cheaper attacks equals rapid escalation. The 32% increase happened over just four months. Google warns that both scale and sophistication will increase.
And the attack surface is expanding. Palo Alto Networks' Unit 42 documented similar findings: AI agents consuming untrusted web content without enforcing a strict data-instruction boundary turn every page they read into a potential attack vector.
What This Means for EU AI Act Compliance
If your organization deploys AI systems in the EU, this isn't just a security problem. It's a compliance problem.
Article 15 of the EU AI Act requires high-risk AI systems to achieve an "appropriate level of accuracy, robustness, and cybersecurity." Robustness specifically includes resilience against "attempts by unauthorised third parties to alter its use, outputs or performance by exploiting the system's vulnerabilities."
Prompt injection is exactly that: an unauthorized third party exploiting a vulnerability to alter the system's outputs and performance. If your AI system is high-risk under Annex III and it's vulnerable to prompt injection, you have a compliance gap.
Article 9 (risk management) requires identifying and analyzing "known and reasonably foreseeable risks." After Google published data showing a 32% surge in prompt injection attacks across billions of web pages, prompt injection is no longer a "reasonably foreseeable" risk. It's a documented, measured, and growing threat. Not testing for it means your risk management system is incomplete.
What to Actually Do About It
There is no silver bullet for prompt injection. Anyone selling you one is lying. But there are layers of defense that reduce the risk significantly.
Input validation at the boundary. Before your AI agent processes any external content, scan it for injection patterns. This won't catch everything, but it catches the unsophisticated attacks that make up the majority of what Google found.
Strict data-instruction separation. Your AI agent needs to distinguish between data it's reading and instructions it should follow. This is an architectural decision, not a filter.
Least privilege for AI agents. An AI assistant that summarizes web pages does not need payment credentials. An agent that drafts emails does not need file system access. The PayPal payload only works if the agent has payment capabilities.
Monitoring and logging. Article 12 of the EU AI Act requires automatic logging for high-risk AI systems. Even if you can't prevent every injection, you can detect them. Log what your agents read, what instructions they executed, and flag anomalies.
Regular security testing. Pen-test your AI systems the way you pen-test your web applications. Red team them with prompt injection payloads. Do it before an attacker does, and document the results for your compliance file.
We help companies build these layers. Not as a product, but as implementation work: auditing what your AI agents can access, testing them against known injection patterns, and building the logging and monitoring infrastructure that both security and EU AI Act compliance require.
If you're deploying AI systems in the EU and haven't tested them for prompt injection, our risk assessment quiz is a good starting point to understand where you stand.
About DeviDevs: We build ML platforms, secure AI systems, and help companies comply with the EU AI Act. devidevs.com