Moltbook Leak Analysis: 1.5M API Keys Exposed via Supabase Misconfiguration
Author: Nino, Senior Tech Editor
The launch of Moltbook felt like a fever dream in the AI community. It was marketed as a Reddit-style social network designed exclusively for AI agents. For a few days, the internet watched in fascination as autonomous bots posted, commented, and even formed strange digital subcultures—most notably, a bizarre obsession with crab memes. Humans were relegated to the sidelines, observing a million-plus agents chatting autonomously. However, the hype machine hit a brick wall when security researchers from Wiz disclosed a catastrophic vulnerability. A simple misconfiguration in their Supabase database left the entire platform’s internal state exposed to the public internet.
This wasn't a sophisticated zero-day exploit or a nation-state attack. It was a failure of the most basic security primitives in modern web development. Because Moltbook's creators leaned heavily into "vibe coding"—the practice of letting AI generate code quickly without rigorous manual review—fundamental guardrails like Row-Level Security (RLS) were completely ignored. The result? Over 1.5 million API tokens, 6,000 human email addresses, and private agent direct messages were accessible to anyone with a browser. When building complex AI infrastructure, developers often overlook the security of the underlying data layer. Using a robust LLM API aggregator like n1n.ai can help mitigate some of these risks by centralizing key management, but it cannot fix a fundamentally broken database architecture.
The Technical Failure: What Went Wrong?
Moltbook utilized Supabase, a popular open-source Firebase alternative built on PostgreSQL. Supabase is powerful because it allows developers to interact with the database directly from the client side using a publishable API key. However, this power comes with a critical requirement: you must enable Row-Level Security (RLS).
In a standard configuration, the Supabase anon key allows anyone to make requests to your database endpoint. Without RLS policies, those requests can read every row in every table. Moltbook failed to enable RLS on their primary tables. This meant that any user could open their browser console and run a simple fetch command to download the entire agents or messages table.
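To make the severity concrete, here is a sketch of the kind of request any visitor could issue against an RLS-less Supabase project through its auto-generated PostgREST endpoint. The project ref and anon key below are hypothetical placeholders; in a real attack they would be lifted straight from the site's frontend bundle.

```typescript
// Hypothetical values for illustration only.
const PROJECT_REF = "abcd1234";
const ANON_KEY = "eyJ-anon-key-from-frontend-bundle";

// PostgREST exposes every table under /rest/v1/<table>; with RLS
// disabled, `select=*` returns every row the anon role can reach.
function dumpTableUrl(projectRef: string, table: string): string {
  return `https://${projectRef}.supabase.co/rest/v1/${table}?select=*`;
}

// The "simple fetch command" a browser console would run:
async function dumpTable(table: string): Promise<unknown[]> {
  const res = await fetch(dumpTableUrl(PROJECT_REF, table), {
    headers: { apikey: ANON_KEY, Authorization: `Bearer ${ANON_KEY}` },
  });
  return res.json();
}
```

With RLS enabled and no permissive policy, the same request simply returns an empty array instead of the table's contents.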
The Impact of the Exposure:
- 1.5 Million API Tokens: This included internal Moltbook tokens and, more dangerously, third-party credentials for providers like OpenAI and Anthropic.
- 6,000+ Human Emails: Real-world identities linked to the bot creators, creating a goldmine for phishing attacks.
- Private DMs: Conversations between agents that were supposed to be private were fully readable.
- Full Read/Write Access: Attackers didn't just have read access; they could modify agent behavior, post as other users, and effectively hijack the entire network.
How to Secure Supabase for AI Agents
If you are building an agentic platform using models like DeepSeek-V3 or Claude 3.5 Sonnet, you must ensure your backend is hardened. Here is a step-by-step implementation guide to prevent the Moltbook disaster in your own project.
1. Enable RLS on Every Table
When you create a table with raw SQL in Supabase, RLS is disabled by default (Postgres's own default; the dashboard's Table Editor enables it for new tables). You must enable it explicitly:
```sql
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;
```
2. Define Granular Policies
Enabling RLS without defining any policies blocks all access, so you then need to define who can see what. For an agent social network, a user should only be able to read messages they sent or received.
```sql
CREATE POLICY "Users can view their own agents"
ON agents FOR SELECT
USING (auth.uid() = owner_id);

CREATE POLICY "Agents can read their own DMs"
ON messages FOR SELECT
USING (auth.uid() IN (sender_id, receiver_id));
```
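The effect of the second policy can be mirrored in plain code. This is an illustrative model of the predicate, not Supabase's implementation:

```typescript
interface Message {
  sender_id: string;
  receiver_id: string;
  body: string;
}

// Mirrors USING (auth.uid() IN (sender_id, receiver_id)):
// a row is visible only to its sender or its receiver.
function canRead(uid: string, msg: Message): boolean {
  return uid === msg.sender_id || uid === msg.receiver_id;
}

// RLS silently filters out rows the caller may not see,
// rather than raising an error.
function visibleMessages(uid: string, rows: Message[]): Message[] {
  return rows.filter((m) => canRead(uid, m));
}
```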
3. Never Store Secrets in Client-Accessible Tables
This was Moltbook's biggest sin. They stored API keys in a table that was accessible via the client-side SDK. Secrets should always be stored in a separate schema (like vault) that is never exposed to the anon or authenticated roles. When you need to call an LLM, use a server-side function (Edge Function) to retrieve the key and make the request.
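As a sketch of that server-side pattern, here is a minimal proxy helper in the style of a Deno Edge Function. The handler shape, model name, and env-var name are illustrative assumptions; the point is that the provider key is read server-side and never reaches the browser.

```typescript
// Sketch of a server-side LLM proxy (hypothetical names; adapt to your
// runtime). The provider key lives in a server-side secret or env var,
// never in a table the anon/authenticated roles can read.
type ChatRequest = { prompt: string };

function buildProviderCall(req: ChatRequest, apiKey: string) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // illustrative model name
        messages: [{ role: "user", content: req.prompt }],
      }),
    },
  };
}

// Inside a real Edge Function you would do roughly:
//   const key = Deno.env.get("OPENAI_API_KEY")!; // server-side only
//   const { url, init } = buildProviderCall(await req.json(), key);
//   return fetch(url, init);
```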
To simplify this process, many enterprises are moving toward n1n.ai. By using n1n.ai, you can route all your LLM calls through a single, secure endpoint. This eliminates the need to manage dozens of individual provider keys within your own database, significantly reducing the blast radius if a leak occurs.
Comparison: Secure vs. Insecure AI Architectures
| Feature | Moltbook Approach (Insecure) | Best Practice (Secure) |
|---|---|---|
| Database Security | RLS Disabled | RLS Enabled with JWT validation |
| Key Management | Plaintext in public tables | Encrypted in private schema or via n1n.ai |
| Logic Execution | Client-side heavy | Server-side Edge Functions |
| LLM Access | Hardcoded provider keys | Centralized via API Aggregator |
| Latency | < 100ms (but unsafe) | < 150ms (with security overhead) |
The "Vibe Coding" Trap
The Moltbook incident highlights a growing trend in the AI era: "Vibe Coding." This refers to developers using tools like Cursor or GitHub Copilot to generate massive amounts of code based on high-level prompts. While this accelerates prototyping, it often skips the "boring" parts of software engineering—security, testing, and architecture.
When you ask an AI to "Build me a social network for agents on Supabase," it might generate the UI and the basic database schema, but it rarely reminds you to configure PostgreSQL policies or manage your service_role keys safely. As we move toward more autonomous agents—where bots have the agency to spend money or access RAG (Retrieval-Augmented Generation) databases—the stakes for these mistakes become existential.
Pro Tips for AI Developers
- Rotate Keys Aggressively: If you suspect a leak, rotate your keys immediately. If you are using n1n.ai, you can manage key rotation and usage limits from a single dashboard.
- Audit Your Skills: Before deploying an agent that can execute code or access the internet, audit its "skills" or tools. A malicious plugin disguised as a simple utility can exfiltrate your environment variables.
- Sanitize Prompt Inputs: Use a middleware layer to check for prompt injection. Even if your database is secure, an agent could be tricked into leaking data through its own chat interface.
- Use Environment Variables: Never, under any circumstances, commit an API key to a git repository or store it in a client-side .env file that gets bundled into the frontend.
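The prompt-sanitization tip above can be sketched as a naive middleware check. This is a heuristic illustration only; string matching is easy to evade, so treat it as one layer alongside a dedicated guardrail service.

```typescript
// Naive heuristic filter for obvious injection phrases. Illustrative
// sketch, not a robust defense: run it before a prompt reaches the model.
const SUSPICIOUS_PATTERNS: RegExp[] = [
  /ignore (all )?previous instructions/i,
  /reveal (your )?(system prompt|api key|secret)/i,
  /you are now (in )?developer mode/i,
];

function screenPrompt(input: string): { ok: boolean; reason?: string } {
  for (const pattern of SUSPICIOUS_PATTERNS) {
    if (pattern.test(input)) {
      return { ok: false, reason: `matched ${pattern}` };
    }
  }
  return { ok: true };
}
```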
The Future of Agentic Infrastructure
Moltbook's breach is a wake-up call. The "Agentic Internet" is arriving, but it is currently fragile and human-dependent. The industry needs to move away from naive implementations and toward robust, secure architectures. We are seeing the rise of specialized security layers for LLMs, including better RAG security and proxy services that provide audit logs for every agent interaction.
As you scale your AI applications, remember that speed should not come at the cost of security. Whether you are using LangChain to build complex chains or simple API calls to DeepSeek-V3, the foundation must be solid.
Get a free API key at n1n.ai