PALO ALTO, Calif., May 15, 2025 /PRNewswire/ — Pangea, a leading provider of AI security guardrails, today released findings from its global $10,000 Prompt Injection Challenge conducted in March 2025. The month-long initiative attracted more than 800 participants from 85 countries who attempted to bypass AI security guardrails across three virtual rooms with increasing levels of difficulty.
The research comes at a critical time as GenAI adoption has accelerated dramatically across industries, with a majority of enterprises now deploying AI-powered applications that interact directly with customers, employees, or sensitive internal systems. Despite this rapid adoption and integration into business-critical operations, many organizations have yet to implement AI-specific security protocols beyond frontier model defaults.
The challenge generated nearly 330,000 prompt injection attempts using more than 300 million tokens, creating a comprehensive dataset that reveals blindspots in how organizations are currently securing their AI applications.
Key Findings:
Non-Deterministic Security Challenge: Unlike traditional cybersecurity threats, prompt injection attacks exhibit unpredictable success rates due to the non-deterministic nature of LLMs. A prompt injection that fails 99 consecutive times may randomly succeed on the 100th attempt, even with identical content.Data Leakage & Reconnaissance Risk: in addition to the risk of sensitive data leakage and inappropriate responses, an AI application can also be exploited for adversarial reconnaissance purposes to reveal context like what server it’s being run on and open ports it can access.Defense in Depth Necessity: Organizations relying solely on native LLM guardrails are the most vulnerable—approximately 1 in 10 prompt injection attempts succeeded against basic system prompt guardrails. Multi-layered defenses reduced successful attacks by orders of magnitude.Agentic AI Amplifies Risk: As organizations move toward agentic AI with database and tooling access, compromised systems could enable sophisticated lateral movement within networks, dramatically elevating the potential impact of prompt injection attacks.
“This challenge has given us unprecedented visibility into real-world tactics attackers are using against AI applications today,” said Oliver Friedrichs, co-founder and CEO of Pangea. “The scale and sophistication of attacks we observed reveal the vast and rapidly evolving nature of AI security threats. Defending against these threats must be a core consideration for security teams, not a checkbox or afterthought.”
Joey Melo, a professional penetration tester and the only contestant to successfully escape all three virtual rooms, spent two days developing a multilayered attack that ultimately bypassed the single level in room three.
“Prompt injection is especially concerning when attackers can manipulate prompts to extract sensitive or proprietary information from an LLM, especially if the model has access to confidential data via RAG, plugins, or system instructions,” noted Joe Sullivan, former CSO of Cloudflare, Uber, and Facebook. “Worse, in autonomous agents or tools connected to APIs, prompt injection can result in the LLM executing unauthorized actions—such as sending emails, modifying files, or initiating financial transactions.”
In response to these findings, Pangea recommends organizations implement a comprehensive security strategy for AI applications that includes:
Multi-Layered Guardrails: Deploy guardrails to prevent prompt injection, protect the system prompt, prevent confidential information and PII exposure, and detect malicious entities using statistical and LLM-driven analysis techniques.Strategic Attack Surface Reduction: Balance functionality with security by restricting input languages, operations, and response types in security-sensitive contexts.Continuous Security Testing: Implement red team exercises specifically designed to test AI applications against evolving prompt injection techniques.Dynamic Temperature Management: Consider reducing model temperature settings in security-critical applications to minimize randomness that attackers can exploit.Dedicated Security Resources: Allocate one or more resources to track the rapidly evolving prompt injection landscape or partner with commercial security providers specialized in AI defense.
Friedrichs adds, “The industry is not paying enough attention to this risk and is underestimating its impact in many cases, playing a dangerous wait-and-see game. The rate of change and adoption in AI is astounding—moving faster than any technology transformation in the past few decades. With organizations rapidly deploying new AI capabilities and increasing their dependence on these systems for critical operations, the security gap is widening daily. The time to get ahead of these concerns is now.”
The full research report, “Defending Against Prompt Injection: Insights from 300K attacks in 30 days,” is available now, here: https://info.pangea.cloud/prompt-injection-research-report-2025
About Pangea
Pangea’s AI Guardrail Platform empowers security teams to ship secure AI applications quickly and protect workforce AI use with the industry’s most comprehensive set of AI guardrails, easily deployed via gateways or into applications with just a few lines of code. Pangea stops LLM security threats ranging from prompt injection to sensitive data leakage, covering 8 out of 10 OWASP Top Ten Risks for LLM apps, while accelerating engineering velocity and unlocking AI runtime visibility and control for security teams.
For more information, visit pangea.cloud or contact: press@pangea.cloud
Media Contact: Growth Stack Media | 415-574-0738
View original content to download multimedia:https://www.prnewswire.com/news-releases/pangea-unveils-definitive-study-on-genai-vulnerabilities-insights-from-300-000-prompt-injection-attempts-302456650.html
SOURCE Pangea Cyber