Loading page...
Loading page...
Secure AI Agents.
General Analysis helps security teams adversarially test, monitor, and protect AI agents and systems in production.
Platform snapshot
Connected sources
GitHub, cloud, LLM providers
Discovered surface
Models, MCPs, KBs, tool schemas
Active review queue
High-risk agent paths surfaced
Featured in
Tansive
How-To
Tansive's blog builds on our research and implements a working defense against the Supabase MCP exploit using its open‑source AI‑agent runtime. The article recaps how an attacker's support‑ticket prompt tricked Cursor's AI into leaking the `integration_tokens` table, then demonstrates how Tansive enforces role‑based policies and input constraints to block such queries. Detailed examples show policies that restrict `execute_sql` capabilities, configure per‑role MCP endpoints and generate tamper‑evident audit logs.
Simon Willison
Blog Feature
AI blogger Simon Willison flags a dangerous combination he calls the “lethal trifecta” – granting an AI agent access to private SQL data, exposing it to untrusted user content and giving it a way to communicate externally. He points to our Supabase MCP attack where a support ticket contained hidden instructions telling the model to read the `integration_tokens` table and insert the secrets back into the ticket, which the agent obediently did. The post is a warning that agents with `service_role` privileges and no sense of context boundaries can be tricked into exfiltrating entire databases.
The Primeagen
Video Feature
This video walks through our Supabase MCP exploit. A malicious support‑ticket instructs Cursor’s AI assistant to `SELECT` all rows from a sensitive `integration_tokens` table and `INSERT` them back into the ticket. Because the agent runs with a full `service_role` key that bypasses Row‑Level Security, it dutifully leaks every secret token. The walkthrough shows the attack flow and explains why untrusted inputs plus over‑privileged agents equal catastrophic data leaks.
Weights & Biases
Case Study
Weights & Biases builds on our Supabase research to show how prompt‑injection attacks abuse the Model Context Protocol. It reproduces the exfiltration and then outlines layered defenses: issuing minimal‑scope credentials, using a gateway to enforce per‑table policies, running MCP servers in read‑only mode to eliminate write‑based exfiltration, filtering untrusted inputs and sandboxing model outputs. The article calls our post an outstanding piece of research and highlights why MCP servers must adopt defense‑in‑depth.
GitHub
Open Source
General Analysis’s open‑source "Jailbreak Cookbook" collects dozens of jailbreaks and prompt‑injection techniques along with unified infrastructure to run them. The blog post introducing it notes that we provide implementations for most listed jailbreaks in a single repo and supply full documentation for researchers and red‑teamers. It’s a reference library and playground for anyone building AI security tools.
Together AI
Partnership
Together AI announces a partnership with General Analysis to stress‑test open‑source language models. The post explains that GA’s programmable red‑teaming framework probes models across prompt‑injection, jailbreak and targeted‑failure scenarios, revealing concrete vulnerabilities and mitigation strategies. Running campaigns on Together’s high‑throughput inference API allows evaluations that process tens of billions of tokens, and GA’s open‑source library now natively supports Together’s endpoints.
Apideck
Security Brief
Apideck’s industry‑insights blog surveys the state of Model Context Protocol security in 2025. It highlights real‑world vulnerabilities—prompt injection, tool poisoning, over‑privileged access and token leakage—and cites exploits observed in GitHub, Supabase and other servers. The post emphasises that attackers can hijack an AI’s behaviour, exfiltrate data or trigger malicious actions through MCP connections and explains how a new OAuth 2.0‑based specification aims to tighten authorization.
Pomerium
Zero Trust
Pomerium’s write‑up dissects our Supabase MCP incident as a classic confused‑deputy problem. An LLM agent running with the full `service_role` key ingested a malicious support message and executed the embedded SQL to select every row from the `integration_tokens` table and write them back to the ticket. Because Row‑Level Security is bypassed by service keys, no permission checks stopped the leak. The article urges using least‑privilege credentials, read‑only MCP servers and gateway‑enforced policies to prevent similar breaches.
Composio
Playbook
Composio details multiple classes of MCP vulnerabilities and emphasises that simple guardrails aren’t enough. It warns that malicious tool descriptions can silently inject harmful prompts, many servers lack proper OAuth handling, supply‑chain risks are underestimated and real‑world failures like the Supabase lethal‑trifecta attack and Asana and mcp‑remote command‑injection flaws have already happened. The article encourages developers to vet third‑party tools and follow the new MCP security spec.
Max Planck
Reference
Researchers at the Max Planck Institute model red‑teaming as a function of the capability gap between attacker and target models. Their study evaluates over 500 attacker–target pairs using LLM‑based jailbreak attacks and observes that more capable models are better attackers, while attack success drops sharply once the target’s capability exceeds the attacker’s. The paper derives a scaling law predicting attack success based on this capability gap and discusses how fixed‑capability attack models may become ineffective against future models.
Oso
Blog Feature
Oso uses the Supabase exploit to explain why LLM authorization is challenging. It notes that the attack hinged on three issues: accepting untrusted input, conflating instructions with data and using an over‑privileged database account. The post argues that prompt‑injection detection is extremely hard and urges designers to narrow effective permissions and prevent AI agents from reading sensitive tables in the first place.
Alpha Insights
Perspective
Alpha Insights argues that MCP servers must default to read‑only. After the Supabase incident it notes that Supabase’s documentation now recommends read‑only mode by default and explains that 43 % of production MCP servers have command‑injection vulnerabilities. The article explains how MCP servers should act as views, not controllers: allowing only SELECT queries prevents attacks from dropping or modifying tables. It concludes that combining privileged access, untrusted input and an exfiltration channel—the lethal trifecta—creates a backdoor.
ivision Research
Security Talk
ivision’s presentation on Model Context Protocol security explains that AI context consists of system prompts, conversation history, tool calls and user messages, and that mixing these channels can expose sensitive data. It highlights Simon Willison’s lethal‑trifecta framework—access to private data, external communication and untrusted content—and uses the Supabase ticketing example where a malicious message told an `execute_sql()` tool to fetch integration keys, completing all three elements. The talk urges practitioners to test MCP servers rigorously and avoid configurations that combine the trifecta.
The CyberWire
Show Notes
Show notes for The CyberWire’s FAIK Files episode link to our technical breakdown of the iMessage Stripe exploit. In that attack, Claude was jailbroken to mint unlimited Stripe discount coupons; the podcast notes direct listeners to our General Analysis post for the full details.
Agent Systems
Every action leaves a path.
General Analysis maps those paths before attackers do.
How It Works
Connect cloud, code, docs, and agent infrastructure. We extract the full inventory of AI assets across your environment so nothing slips through the cracks.

Securely ingesting and normalizing telemetry from your tools.
Unverified MCPs, autonomous agents holding production credentials, uncensored models, over-permissive IAM roles — surfaced with concrete evidence and mapped to OWASP LLM Top 10.
Launch hundreds of adversarial simulations against any agent system, with OWASP threat tags as targets. Watch in real time what your AI can be coerced into doing.
Newsletter
Short updates on agent attacks, red-team methods, runtime guardrails, and production AI security.
Occasional updates. Unsubscribe anytime.
Our thesis
AI systems are trained for the average case. Attackers exploit the edges. AI systems are nonlinear, and fixing one vulnerability doesn't prevent the next.
Offensive models discover exploits. Forecasting models predict them. Each makes the other stronger.
Every engagement generates unique threat data. Early movers in security compound longest.